Category: Building Bilbo
In our experiments, features are divided into three main groups: input features, local features and global features. Input features indicate the words or punctuation marks in the note string. Local features are the characteristics of the input...
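To make the three groups concrete, here is a minimal sketch of how a single token of a note string could be turned into such features. The feature names and definitions below are illustrative assumptions, not the exact Bilbo feature set.

from collections import Counter

# Illustrative sketch (not the exact Bilbo feature set): the three feature
# groups for one token of a note string.
def token_features(tokens, i, note_counts):
    token = tokens[i]
    return {
        # input feature: the word or punctuation mark itself
        "input": token.lower(),
        # local features: characteristics of the input token itself
        "is_capitalized": token[:1].isupper(),
        "is_digit": token.isdigit(),
        "is_punct": not any(ch.isalnum() for ch in token),
        # global feature: a property computed over the whole note, here
        # whether the token occurs more than once in it
        "repeated_in_note": note_counts[token.lower()] > 1,
    }

tokens = ["Dupont", ",", "J.", ",", "Histoire", "de", "la", "lecture", ",", "1998", "."]
note_counts = Counter(t.lower() for t in tokens)
print(token_features(tokens, 0, note_counts))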
A text document for classification is basically represented by a set of word count-based features. The simplest is the word frequency feature, while the tf-idf (term frequency–inverse document frequency) weight computes each...
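As a reminder of how that weight is computed, here is a minimal sketch of the standard tf-idf formula, w(t, d) = tf(t, d) × log(N / df(t)), applied to made-up toy documents; a real setup would use the actual corpus and tokenizer.

import math
from collections import Counter

# Toy documents, for illustration only.
docs = [
    "the cat sat on the mat".split(),
    "the dog sat on the log".split(),
    "cats and dogs".split(),
]

N = len(docs)
df = Counter()          # document frequency of each term
for d in docs:
    df.update(set(d))

def tfidf(term, doc):
    tf = doc.count(term)            # raw term frequency in this document
    idf = math.log(N / df[term])    # inverse document frequency
    return tf * idf

print(tfidf("cat", docs[0]))   # appears in 1 of 3 docs -> higher idf
print(tfidf("the", docs[0]))   # appears in 2 of 3 docs -> lower idf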
Recall that the treatment of the level 2 corpus raises several problems, which come from the nature of footnotes at the end of articles. In this posting, we show our approach to...
Our level 2 corpus consists of the references in the notes of articles. Like the level 1 corpus, it follows the TEI guidelines, but it requires an additional operation to extract the reference part. That is,...
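For illustration, a hedged sketch of that extraction step, assuming the references appear as <bibl> elements inside <note> elements in the TEI namespace; the actual encoding of the Revues.org corpus may differ.

import xml.etree.ElementTree as ET

TEI_NS = "{http://www.tei-c.org/ns/1.0}"

def extract_references(path):
    # Collect the text of each reference found inside a footnote.
    tree = ET.parse(path)
    refs = []
    for note in tree.getroot().iter(TEI_NS + "note"):
        for bibl in note.iter(TEI_NS + "bibl"):
            # keep only the reference part, not the surrounding note text
            refs.append("".join(bibl.itertext()).strip())
    return refs

# Example: extract_references("article.xml") would return one string per
# reference found in the article's notes.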
Our corpus follows the TEI guidelines, which means its structure is not perfectly adapted to the task of reference field identification. To keep the Revues.org corpus reusable, we decided to follow TEI...
This posting covers all our trials to acquire the most suitable learning data in terms of tokenization, features and labels. We constructed about 20 different CRF models, changing the way of...
So far, the best-performing dataset is dataset ver. 2, used in the second experiment, with an overall accuracy of 87.86%. Based on the know-how we have acquired from the failures of the previous...
Part II: Experiments on various datasets with different strategies. We prepared several datasets, each extracted differently from the original Revues.org corpus. The first and second rules that had been applied for the label selection on...
Part I: Attribute analysis. In this and the following postings, we briefly report a number of experiments that ended in failure in terms of accuracy. From these failures, we could successfully move on...
In this experiment, we prepare a second version of the dataset by modifying the first one, then train and test a CRF model with it. For an exact comparison with the previous experiment, we...
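As an illustration of the train-and-test step, here is a minimal sketch using sklearn-crfsuite, which is not necessarily the CRF toolkit used in these experiments; the feature dictionaries and labels below are placeholders.

import sklearn_crfsuite
from sklearn_crfsuite import metrics

# One list of feature dicts per reference, one list of labels per reference.
X_train = [[{"input": "dupont", "is_capitalized": True}, {"input": ",", "is_punct": True}]]
y_train = [["surname", "c"]]
X_test, y_test = X_train, y_train   # placeholder split for the sketch

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=100)
crf.fit(X_train, y_train)

y_pred = crf.predict(X_test)
print(metrics.flat_accuracy_score(y_test, y_pred))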
Part IV: Result analysis and conclusion. The overall result confirms that a CRF trained on our trial version of the learning dataset gives reasonable estimation accuracy (85.34% micro-averaged precision) on a test set. It...
Part III: Evaluation (CRF on the Revues.org learning dataset ver. 1). After flattening the levels in the tags, the tokens in a reference are represented as on the left-hand side of the following table. Then...
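Purely as an illustration of that flattened representation, the following snippet pairs the tokens of a made-up reference with per-token labels; the label set shown (surname, forename, title, date, c for punctuation) is a placeholder for the labels actually listed in the table.

# Illustrative only: a flattened token/label representation for one reference.
tokens = ["Dupont", ",", "J.", ",", "Histoire", "de", "la", "lecture", ",", "1998", "."]
labels = ["surname", "c", "forename", "c", "title", "title", "title", "title", "c", "date", "c"]

for token, label in zip(tokens, labels):
    print(f"{token}\t{label}")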
Part II: Data analysis. The Revues.org reference corpus is represented in XML format and manually tagged according to the TEI guidelines. An XML file corresponds to an article on the Revues.org site and includes on average...
Part I: Introduction. We present the first experimental results on the Revues.org reference corpus. This corpus constitutes the first level of our reference data extracted from the Revues.org site. The objective here is to automatically label...