First experimental result on Revues.org corpus level 1 – Part IV
Part IV: Result analysis and conclusion
The overall result confirms that a CRF model trained on our trial version of the learning dataset gives reasonable estimation accuracy on the test set (85.34% micro-averaged precision). This is encouraging considering that this first version of the learning data is very simple: for example, we did not use any local, layout or external features that could be extracted from the various attributes in the tags.
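For concreteness, the sketch below shows how such a sequence-labelling setup can be trained and scored with micro-averaged precision. It uses the sklearn-crfsuite package and toy word-identity features purely for illustration; the toolkit, the feature set and the data shown here are assumptions for the example, not the ones actually used in our experiments.

```python
# Minimal sketch: train a linear-chain CRF on token/label sequences and
# report micro-averaged precision. The feature set mirrors the very
# simple first dataset (token identity only, no local/layout/external
# features). All data below is hypothetical toy data.
from itertools import chain

import sklearn_crfsuite
from sklearn.metrics import precision_score

def token_features(tokens, i):
    return {"word": tokens[i], "word.lower": tokens[i].lower()}

def to_features(tokens):
    return [token_features(tokens, i) for i in range(len(tokens))]

# Toy reference, already tokenized, with illustrative labels.
train_tokens = [["Dupont", ",", "J", ".", "(", "1999", ")", "."]]
y_train = [["surname", "punc", "forename", "punc", "punc", "date", "punc", "punc"]]
X_train = [to_features(s) for s in train_tokens]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=100)
crf.fit(X_train, y_train)

# Evaluate on a held-out, already-tokenized test set (placeholder split here).
X_test, y_test = X_train, y_train
y_pred = crf.predict(X_test)
flat_true = list(chain.from_iterable(y_test))
flat_pred = list(chain.from_iterable(y_pred))
print("micro-averaged precision:",
      precision_score(flat_true, flat_pred, average="micro"))
```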
Looking inside the learning data, several meaningless tags chosen as labels, such as “lb”, “pb”, “ptr” and “emph”, occur very often inside other, meaningful tags. These tags indeed yield low performance in terms of both precision and recall. Moreover, as mentioned before, the precision and recall for the “hi” label, which does not seem appropriate as a label, are also low. We may replace these labels with other, more suitable tags.
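As an illustration of this replacement idea, the following sketch picks, for each token, the closest enclosing TEI tag that is not a presentation-only tag. The set of tags to skip and the fallback label “other” are assumptions made for the example, not the final relabelling rule.

```python
# Minimal sketch of the label-replacement idea: the TEI markup gives each
# token a chain of enclosing tags (closest first). Instead of always taking
# the closest tag, skip presentation-only tags and fall back to the nearest
# informative one.
PRESENTATION_TAGS = {"lb", "pb", "ptr", "emph", "hi"}

def pick_label(tag_chain):
    """tag_chain: enclosing TEI tags for a token, ordered closest-first."""
    for tag in tag_chain:
        if tag not in PRESENTATION_TAGS:
            return tag
    return "other"  # assumed fallback when only presentation tags are present

# Example: a token marked <hi> inside <title> keeps the informative label.
print(pick_label(["hi", "title", "bibl"]))   # -> "title"
print(pick_label(["pb"]))                    # -> "other"
```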
There is also the problem of tokenization. For this evaluation we used a test set that was already tokenized. However, a new reference to be labelled in the real world generally arrives as a plain string; that is, we need to tokenize the reference before the automatic labelling with a learned CRF model.
To apply a learned CRF model to a new reference, the reference must be tokenized in the same format as the learning data. Currently, the Revues.org dataset is manually tokenized according to the TEI guidelines, and the tokenization is relatively detailed. For example, some special characters such as punctuation marks are separated as tokens, but not all of them are: if a special character such as a comma is contextually important, it is marked with a tag; if not, it is ignored and attached to the previous or next token.
In conclusion, we will tackle the problems identified above step by step by preparing updated versions of the learning dataset. With this objective, our next experimental plans are as follows:
- Dataset ver. 2: Refine the quality of the existing labels. Among the multiple tags available for a token, verify whether the closest one is really the best choice as a label. If not, replace it with another tag, then rebuild the model.
- Dataset ver. 3: Verify whether some attributes would make better labels. If so, replace the existing labels with them and rebuild the model.
- Dataset ver. 4: Use local, layout and lexicon features. Study the roles of the existing attributes, then pick the useful ones as features. Carefully define the characteristics of the features, referring to other research. Then rebuild the model.
- Dataset ver. 5: Tokenize the learning data differently, paying particular attention to the treatment of punctuation. Rebuild the model and test it on newly tokenized test data. This time, we should assume that we have no prior information on the tokenization of the test data: a new reference is entered as a string, and we tokenize it automatically on a predefined simple basis such as whitespace and punctuation (see the sketch after this list).
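The following sketch illustrates the kind of simple whitespace-and-punctuation tokenization assumed for dataset ver. 5; the regular expression is only a starting point for illustration, not the rule set that will actually be adopted for Revues.org references.

```python
# Minimal sketch: split an incoming reference string on whitespace and
# separate punctuation marks into their own tokens.
import re

def simple_tokenize(reference: str) -> list[str]:
    # Keep either runs of "word" characters or single non-space punctuation marks.
    return re.findall(r"\w+|[^\w\s]", reference, flags=re.UNICODE)

print(simple_tokenize("Dupont, J. (1999). Le titre de l'article."))
# -> ['Dupont', ',', 'J', '.', '(', '1999', ')', '.', 'Le', 'titre', ...]
```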