Is it possible to predict the language of short texts?
In a previous post, we evaluated and compared three libraries for automatic language detection, all of which use a Bayesian probabilistic approach with n-gram features. We chose Language detection from Cybozu Labs for its efficiency and effectiveness, based on evaluation results on the EuroParl parallel corpus.
In this post, we will evaluate the impact of text length on the effectiveness of language detection. In addition, we will present an experimental evaluation of Language detection on a corpus of bibliographic references, which is composed of relatively short text samples.
1. Analyzing the relationship between text length and accuracy
The main goal of this experimental study is to evaluate the performance of Language detection on text samples of varying sizes. The impact of text length is an important factor when evaluating such tools, especially for tasks like the one we will present in section 2.
1.1. Experimental details
In order to evaluate the impact of text length on language detection, we used the Europarl corpus after concatenating the texts in each of the six languages (de, en, es, fr, it, pt). Then, for each length under study, we extracted textual samples from the resulting concatenation using a sliding window of a predefined size and evaluated Language detection on them. Note that we treat words as atomic units and do not enforce the exact character count during sampling: if the window reaches the expected length in the middle of a word, we extract additional characters until the end of that word.
Let us take the phrase “Thank you Mr. President”: using a five-character window size we obtain the following samples: “Thank”, “you Mr.”, and “President”. Note that the last two samples contain more than 5 characters for the reason mentioned above.
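Below is a minimal Python sketch of this sampling procedure. It illustrates the word-boundary rule described above; it is not the code used in the experiments:

```python
def window_samples(text, size):
    """Split text into consecutive samples of roughly `size` characters,
    extending each window to the end of the current word so that words
    are never cut in the middle."""
    samples = []
    start = 0
    n = len(text)
    while start < n:
        end = min(start + size, n)
        # extend the window until the next whitespace (end of the word)
        while end < n and not text[end].isspace():
            end += 1
        samples.append(text[start:end].strip())
        start = end + 1  # skip the separating whitespace
    return [s for s in samples if s]

print(window_samples("Thank you Mr. President", 5))
# ['Thank', 'you Mr.', 'President']
```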
During the experiments, the size of the sampling window varied from 5 to 95 characters, with a 5-character step between two successive experiments. The variation in the number of resulting samples is listed in Table 1 for sizes 5, 45, and 95.
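The evaluation loop can then be sketched as follows, reusing the window_samples helper from the previous snippet. Note the assumptions: the file names are hypothetical, the Python port langdetect stands in for the original Java library used in the post, and plain accuracy is computed here instead of the per-language F1-measure reported below:

```python
from langdetect import DetectorFactory, detect

DetectorFactory.seed = 0  # make detection deterministic across runs

languages = ["de", "en", "es", "fr", "it", "pt"]

for size in range(5, 100, 5):          # window sizes 5, 10, ..., 95
    correct = total = 0
    for lang in languages:
        # hypothetical path: one concatenated Europarl file per language
        with open(f"europarl_{lang}.txt", encoding="utf-8") as f:
            text = f.read()
        for sample in window_samples(text, size):
            total += 1
            try:
                if detect(sample) == lang:
                    correct += 1
            except Exception:  # detection can fail on degenerate samples
                pass
    print(f"window={size:2d}  accuracy={correct / total:.3f}")
```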
Table 1. Number of samples for three window sizes in the corpus
1.2. Results
According to the results illustrated in Figure 1, we believe that there is a logarithmic relationship between the effectiveness of language detection and the length of the processed text. First, we note a significant growth in F1-measure as the window size increases from 5 to 45 characters; beyond that point, the growth becomes slight and insignificant. Second, Language detection guarantees an F1-measure of 90% on texts that are only 15 characters long. These results confirm that this library is capable of detecting languages in short texts, a task considered challenging because of the relatively small number of features that can be extracted from such texts.
Figure 1. Relationship between the effectiveness of language detection and the length of text samples
The next section presents experiments on bibliographic references, which are relatively short texts, extracted from different OpenEdition platforms.
2. Experiments on a bibliographic corpus
In these experiments, the task of Language detection is to identify the language of bibliographic references extracted from OpenEdition platforms. The main goal is to evaluate whether Language detection is capable of handling bibliographic references. This detection supports further text-mining processing, such as the annotation of bibliographic fields using Bilbo.
2.1. Experimental details
The bibliographic corpus is composed of 1000 references in each of the 6 languages (de, en, es, fr, it, pt), randomly extracted from references in OpenEdition. Each reference is annotated with the language of its main title. The average length of the references varies from 114 characters for Portuguese to 139 for French (see Table 2). According to the previous experiments, this number of characters is sufficient for Language detection to predict the language with high precision (>90%). Note that bibliographic references are not only short texts but also contain information such as author or editor names, locations, and dates, which makes language detection a challenging task.
Table 2. Average length of bibliographic references in the corpus
Here is an example of a bibliographic reference from our corpus:
<bibl>BOSERUP E., 1965, <hi rend="italic">The conditions of Agricultural Growth: The Economics of Agrarian Change under Population Pressure</hi>, Aldine, Chicago, 218 p.</bibl>
The language of this reference is English, since its title is in English.
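As a rough illustration, detecting the language of such a reference could look like the following sketch, again with the Python port langdetect standing in for the Java library; the regex-based tag stripping is a simplification for illustration:

```python
import re
from langdetect import detect

ref = ('<bibl>BOSERUP E., 1965, <hi rend="italic">The conditions of '
       'Agricultural Growth: The Economics of Agrarian Change under '
       'Population Pressure</hi>, Aldine, Chicago, 218 p.</bibl>')

plain = re.sub(r"<[^>]+>", " ", ref)   # drop the XML markup
print(detect(plain))                   # expected: 'en'
```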
2.2. Results
According to the results illustrated in Figure 2, precision varied from 92.52% for English to 98.06% for Portuguese. As for recall, it varied from 95.01% for German to 98.19% for English.
Figure 2. Results of experiments on language detection as tested on a bibliographic corpus
The preceding observations correspond to the values in red in Table 3. Most errors on German references occur when the detector predicts English instead of German, which lowers both the precision for English and the recall for German.
Table 3. Confusion matrix of language detection on bibliographic corpus
Here is an example of an English reference that was assigned French. A simple typing error, the French spelling environnement instead of environment, led to a wrong annotation:
“Heileman S. (éd.) (2005). Caribbean environnement Outlook UNEP”
Another example involves an author with a French name (containing “é”, a discriminating character for non-English text) and a title in English, which led to French being detected as the major language:
“Zérah Marie-Hélène (2000) Water Unreliable Supply in Delhi Delhi: Manohar.”
A last example is a translated reference; in this case the major detected language was French, with English considered less probable:
“Dover Kenneth J.(1978) Greek Homosexuality Londres Duckworth traduit par Suzanne Saïd(1982) Homosexualité grecque Grenoble La Pensée sauvage.”
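For mixed-language references like this one, inspecting the full probability distribution makes the notions of a “major” and a “less probable” language concrete. Here is a sketch using the Python port’s detect_langs function (the printed values are only indicative):

```python
from langdetect import DetectorFactory, detect_langs

DetectorFactory.seed = 0  # deterministic output

ref = ("Dover Kenneth J.(1978) Greek Homosexuality Londres Duckworth "
       "traduit par Suzanne Saïd(1982) Homosexualité grecque Grenoble "
       "La Pensée sauvage.")

# detect_langs returns candidate languages with probabilities
for candidate in detect_langs(ref):
    print(candidate.lang, round(candidate.prob, 3))
# e.g. 'fr' with the highest probability, 'en' with a lower one
```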
3. Conclusion
In this post, we presented two experiments using Language detection. The first evaluated the library on text samples of varying length in order to estimate the correlation between text length and the effectiveness of language detection. In the second group of experiments, we evaluated the library on a corpus of bibliographic references extracted from OpenEdition.
According to the results, we conclude that Language detection can guarantee an F1-measure of 90% on short texts. In addition, these results were confirmed on a complex task aimed at annotating bibliographic references with their major language(s).
As for future work, we are interested in multilingual texts. The challenge there is not only to detect the languages present in a text but also to segment the text according to those languages.
Hello!
Very interesting article, thanks! Perhaps you should filter the input prior to language classification… for instance, only keep language-varying infotypes like “title” or “journal name”?
Romain
Hello!
Thanks for your interest and for your comment!
In fact, in order to identify and annotate these infotypes, we use an automatic annotation software known as “Bilbo”, which depends on prior knowledge of the text language.
This is the reason why we need to predict the language of the reference before identifying its parts.
Shereen