Category: Bilbo – Bibliographical Robot


Proper noun features III (corpus 1)

We extract seven different learning sets according to the strategies defined in the previous post. Several important fields are selected for the comparison, including the surname, forename and place fields, which are our main...


Proper noun features II (corpus 1)

In the previous experiments on proper noun lists, certain author names were omitted because of a parsing error. We deduce from this inadvertent mistake and the resulting degradation of performance that we would obtain an improved...


Manual annotation revision (corpus 1)

Eliminating all errors in manual annotation is hard to achieve, especially when the annotation structure is complex and the data size is as large as ours. But the quality of manual annotation is one...


Finding DOI through CrossRef

One of the main services that we have planned using automatic annotation is to provide a unique identifier such as a DOI (Digital Object Identifier) for each reference. DOIs are assigned by CrossRef, the registration...
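As a rough illustration of the idea, the sketch below queries CrossRef's public REST API (api.crossref.org) with a free-text reference and returns the DOI of the best match. The endpoint, parameters and the sample reference are assumptions for illustration, not necessarily the exact service or query used by Bilbo.

```python
# Minimal sketch: look up a DOI for a reference string through
# CrossRef's public REST API. The sample reference is hypothetical.
import requests

def find_doi(reference_text, rows=1):
    """Return the DOI of the best CrossRef match for a free-text reference."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": reference_text, "rows": rows},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    return items[0].get("DOI") if items else None

if __name__ == "__main__":
    ref = "Lafferty J., McCallum A., Pereira F. Conditional Random Fields, ICML 2001"
    print(find_doi(ref))
```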


Note annotation on Corpus level 2

We continue our experiments with corpus 2. In the previous post, we showed that a well-defined set of three different feature types improves note classification accuracy. Now we verify the usefulness of note...


Proper noun features on corpus level 1

This part of the experiments concerns the use of external proper noun lists. To overcome the mis-annotation between person names and places, we consider using a set of proper noun lists. Person name and...
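One way such lists could be turned into token-level features is sketched below: each token is checked against a person-name list and a place-name list, and a label is emitted only when the lists disambiguate it. The file names and feature labels are illustrative assumptions, not the actual resources used in the experiments.

```python
# Hedged sketch: gazetteer-based features distinguishing person names from places.
def load_list(path):
    with open(path, encoding="utf-8") as f:
        return {line.strip().lower() for line in f if line.strip()}

def proper_noun_features(tokens, person_list, place_list):
    """Return one feature string per token: ISPERSON, ISPLACE, or O."""
    feats = []
    for tok in tokens:
        key = tok.lower()
        if key in person_list and key not in place_list:
            feats.append("ISPERSON")
        elif key in place_list and key not in person_list:
            feats.append("ISPLACE")
        else:
            feats.append("O")  # unknown or ambiguous token
    return feats

# Example usage with hypothetical list files:
# persons = load_list("person_names.txt")
# places = load_list("place_names.txt")
# print(proper_noun_features(["Martin", "Paris", "2001"], persons, places))
```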


Feature generation for note classification

In our experiments, features are divided into three main groups: input features, local features and global features. Input features are the words or punctuation marks in the note string. Local features are the characteristics of input...
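A minimal sketch of this three-way split is given below for a single note. The concrete local and global features chosen here (capitalization, digit patterns, note length, presence of a year) are illustrative assumptions, since the excerpt does not list the exact feature set.

```python
# Hedged sketch of the three feature groups for one tokenized note.
import re

def input_feature(token):
    # Input feature: the word or punctuation mark itself.
    return token

def local_features(token):
    # Local features: characteristics of the individual token.
    return {
        "capitalized": token[:1].isupper(),
        "all_digits": token.isdigit(),
        "is_punct": bool(re.fullmatch(r"\W", token)),
    }

def global_features(tokens):
    # Global features: characteristics of the whole note string.
    return {
        "length": len(tokens),
        "contains_year": any(re.fullmatch(r"(17|18|19|20)\d{2}", t) for t in tokens),
    }

tokens = ["Martin", ",", "J.", "(", "2001", ")", "."]
for tok in tokens:
    print(input_feature(tok), local_features(tok))
print(global_features(tokens))
```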