First experimental result on Revues.org corpus level 1 – Part III
Part III: Evaluation (CRF on the Revues.org learning dataset ver. 1)
After flattening the nested levels of tags, the tokens of a reference are represented as on the left side of the following table. We then construct the first version of the learning dataset by taking each token as input and its innermost tag as the output label, as shown on the right side (a sketch of this construction follows the table). Some of these innermost tags would serve better as features than as labels, and some attributes would serve better as labels than as attributes, but in this trial version of the dataset we do not handle such cases in detail.
Source data | | | Learning data ver. 1 | |
<token> | <attributes> | <tags> | <token> | <label> |
COPANS | | author surname | COPANS | surname |
, | | author | , | author |
Jean | init | author forename | Jean | forename |
, | comma | c | , | c |
1995 | | edition date | 1995 | date |
, | comma | c | , | c |
« | guillemot_left | c | « | c |
Entrepreneurs | a | title | Entrepreneurs | title |
et | a | title | et | title |
entreprises | a | title | entreprises | title |
…… | | | …… | |
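To make this construction concrete, here is a minimal sketch in Python (the function name and the row format are hypothetical, not the code actually used in the experiment): each token keeps only the innermost tag of its flattened tag path as its label, and untagged tokens fall back to nolabel.

```python
def to_learning_example(token, attributes, tag_path):
    """Return a (token, label) pair, taking the innermost tag as the label.

    `tag_path` lists the flattened tags from outermost to innermost,
    e.g. ["author", "surname"]. Tokens with no tag get the label "nolabel".
    """
    return token, (tag_path[-1] if tag_path else "nolabel")


# The first rows of the reference shown in the table above.
source_rows = [
    ("COPANS", [], ["author", "surname"]),
    (",", [], ["author"]),
    ("Jean", ["init"], ["author", "forename"]),
    (",", ["comma"], ["c"]),
    ("1995", [], ["edition", "date"]),
]

learning_data = [to_learning_example(tok, attrs, tags)
                 for tok, attrs, tags in source_rows]
print(learning_data)
# [('COPANS', 'surname'), (',', 'author'), ('Jean', 'forename'), (',', 'c'), ('1995', 'date')]
```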
Experimental Setting
From the 737 references in XML format, several erroneous references are eliminated, leaving 716 references. Among these, 500 randomly chosen instances, corresponding to about 70% of the total, are used as training data and the rest as test data. There are 31 labels in the training set, including <nolabel>, which indicates that a token has no label: surname, namelink, forename, c, date, title, nolabel, abbr, biblscope, pubplace, publisher, hi, author, edition, extent, distributor, meeting, orgname, camera, emph, pb, name, sponsor, settlement, country, ref, genname, editor, ptr, region, lb.
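As an illustration of the random split described above, a minimal sketch follows (the function name and the fixed seed are assumptions made for the example, not the actual procedure used in the experiment):

```python
import random


def split_references(references, n_train=500, seed=0):
    """Randomly pick n_train references for training, keep the rest for testing."""
    rng = random.Random(seed)
    shuffled = references[:]          # copy so the original order stays untouched
    rng.shuffle(shuffled)
    return shuffled[:n_train], shuffled[n_train:]


# With the 716 cleaned references this yields 500 training and 216 test instances.
# (Dummy stand-ins here; in the experiment each item is a tokenized reference.)
refs = [f"ref_{i}" for i in range(716)]
train_set, test_set = split_references(refs)
print(len(train_set), len(test_set))   # 500 216
```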
Even at a glance, the tags <pb>, <ptr> and <lb> do not seem suitable as labels for the input tokens. Moreover, the tag <hi>, which marks typography, would be better replaced by its attribute; it also occurs very often inside the tag <title>, which seems more suitable as the label. For the time being, however, we put these issues aside and launch our first experiment.
Measures
We conduct a truth-based evaluation, comparing the estimated labels of the test references with their true labels. As measures, we use the micro-averaged precision, which gives the overall accuracy of the estimation, as well as the precision and recall of each label type. For the micro-averaged precision, we count all correctly estimated tokens regardless of label type and divide by the total number of estimated tokens. The precision of a label is the proportion of correctly estimated tokens among all tokens estimated as that label. The recall of a label is the proportion of correctly estimated tokens among all tokens originally carrying that label.
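These definitions translate directly into simple counting. The sketch below (hypothetical function and variable names, assuming the true and estimated labels of the test tokens are available as two aligned lists) computes the micro-averaged precision and the per-label precision and recall:

```python
from collections import Counter


def evaluate(true_labels, predicted_labels):
    """Micro-averaged precision plus per-label precision and recall.

    Both arguments are flat lists of token labels, aligned position by position.
    """
    correct = Counter()    # correctly estimated tokens, per label
    estimated = Counter()  # tokens estimated as each label
    original = Counter()   # tokens originally carrying each label
    for t, p in zip(true_labels, predicted_labels):
        estimated[p] += 1
        original[t] += 1
        if t == p:
            correct[t] += 1

    micro_precision = sum(correct.values()) / len(true_labels)
    precision = {lab: correct[lab] / estimated[lab] for lab in estimated}
    recall = {lab: correct[lab] / original[lab] for lab in original}
    return micro_precision, precision, recall


# Tiny example: 4 of 5 tokens correct; one "title" token mislabeled as "c".
true = ["surname", "forename", "c", "title", "title"]
pred = ["surname", "forename", "c", "c", "title"]
micro, prec, rec = evaluate(true, pred)
print(round(micro, 2))                                   # 0.8
print(round(prec["title"], 2), round(rec["title"], 2))   # 1.0 0.5
```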
The experimental results are as follows:
Total accuracy (micro-averaged precision): 85.34% (5315/6228 × 100)
***** Precision *****
Label | # correctly labeled tokens | # estimated tokens | Precision (%) |
surname | 276 | 324 | 85.1851851852 |
edition | 13 | 20 | 65.0 |
forename | 266 | 302 | 88.0794701987 |
distributor | 54 | 71 | 76.0563380282 |
biblscope | 86 | 126 | 68.253968254 |
settlement | 1 | 1 | 100.0 |
lb | 11 | 19 | 57.8947368421 |
author | 116 | 133 | 87.2180451128 |
orgname | 17 | 38 | 44.7368421053 |
pb | 0 | 2 | 0.0 |
editor | 10 | 16 | 62.5 |
meeting | 54 | 84 | 64.2857142857 |
namelink | 3 | 3 | 100.0 |
hi | 43 | 96 | 44.7916666667 |
abbr | 120 | 124 | 96.7741935484 |
extent | 28 | 28 | 100.0 |
date | 230 | 269 | 85.5018587361 |
publisher | 184 | 223 | 82.5112107623 |
c | 1433 | 1497 | 95.7247828991 |
nolabel | 54 | 76 | 71.0526315789 |
title | 2175 | 2607 | 83.4292289988 |
pubplace | 141 | 169 | 83.4319526627 |
***** Recall *****
Label | # correctly labeled tokens | # original tokens | Recall (%) |
surname | 276 | 312 | 88.4615384615 |
edition | 13 | 59 | 22.0338983051 |
sponsor | 0 | 11 | 0.0 |
forename | 266 | 319 | 83.3855799373 |
distributor | 54 | 89 | 60.6741573034 |
biblscope | 86 | 138 | 62.3188405797 |
settlement | 1 | 8 | 12.5 |
lb | 11 | 26 | 42.3076923077 |
author | 116 | 125 | 92.8 |
orgname | 17 | 41 | 41.4634146341 |
genname | 0 | 1 | 0.0 |
pb | 0 | 13 | 0.0 |
editor | 10 | 25 | 40.0 |
meeting | 54 | 55 | 98.1818181818 |
namelink | 3 | 5 | 60.0 |
hi | 43 | 265 | 16.2264150943 |
abbr | 120 | 151 | 79.4701986755 |
extent | 28 | 31 | 90.3225806452 |
date | 230 | 256 | 89.84375 |
region | 0 | 1 | 0.0 |
publisher | 184 | 263 | 69.9619771863 |
c | 1433 | 1478 | 96.9553450609 |
ref | 0 | 5 | 0.0 |
name | 0 | 1 | 0.0 |
country | 0 | 8 | 0.0 |
nolabel | 54 | 137 | 39.4160583942 |
title | 2175 | 2251 | 96.6237227899 |
pubplace | 141 | 154 | 91.5584415584 |