In a previous blog post, Denny and Kyle described how to train a classifier to isolate mentions of specific kinds of people, places, and things in free-text documents, a task known as Named Entity Recognition (NER). In general, tools such as Stanford CoreNLP can do a very good job of this for formal, well-edited text such as newspaper articles. However, a lot of the data that we need to process at HumanGeo comes from social media, in particular Twitter. Tweets are full of informal language, misspellings, abbreviations, hashtags, @-mentions, URLs, and unreliable capitalization and punctuation. Users also talk about anything and everything on Twitter, so entities that were rarely or never mentioned before can suddenly become popular. All of these factors present huge challenges for general-purpose NER systems that were not designed for this type of text.

Fortunately, there is a good deal of academic research on ways to make NER work better for Twitter data. In fact, every year since 2015 there has been a shared task for Twitter NER at the Workshop on Noisy User-generated Text (W-NUT). A shared task is a competition in which all participants submit a program for a specific task, and the entries are scored and ranked on a common metric. So we already know which system was the best of those that participated, but we don't know how good the systems that didn't compete are, and even the best system is of no use to us if we can't get our hands on it. Unfortunately, none of the popular off-the-shelf NER tools have participated in this shared task, and I have only been able to find one entry, the seventh-place finisher from 2016, that is currently available on the internet.

With this in mind, I decided to use the test data from the 2016 shared task to evaluate systems that you can actually download and start using today, to see how well they perform on tweets. The general-purpose NER systems I selected are Stanford CoreNLP, spaCy, NLTK, MITIE, and Polyglot. The two Twitter-specific systems I selected are OSU Twitter NLP Tools and TwitterNER (the seventh-place entry from 2016). Each of these systems uses a slightly different set of entity types, so I mapped the types in each system's output to just PERSON, LOCATION, and ORGANIZATION, which are common to all of them, and simply ignored any types that didn't match these three (a short sketch of this kind of mapping appears below).

The Stanford CoreNLP NER tool can be run with several options that could potentially improve accuracy on tweets. In particular, there is a part-of-speech (POS) tagger that is optimized for tweets. Since part of speech is one of the features used for NER, improving the POS tagger should also improve NER accuracy. Additionally, there are two options for dealing with text that has inconsistent capitalization. Inconsistent capitalization is a big problem for NER systems because, at least in well-edited text, capitalization is one of the strongest clues that a word is part of a proper noun, and therefore likely to be a named entity. Systems trained only on well-edited text consequently tend to rely too heavily on capitalization when applied to text where it is unreliable. The first option is to preprocess the text with a truecaser, which attempts to automatically figure out what the correct capitalization of the text should be. The second option is to use models that simply ignore case altogether.
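Before getting to the numbers, here is a minimal sketch of what the entity-type mapping described above can look like, using spaCy as the example. The specific label choices (for instance, collapsing spaCy's GPE, LOC, and FAC labels into LOCATION) are illustrative assumptions rather than the exact mapping used in the evaluation:

```python
# Sketch: normalize spaCy's entity labels to the three types shared by all
# of the evaluated systems, ignoring everything else. The mapping below is
# an assumption about one reasonable way to collapse the labels.
import spacy

LABEL_MAP = {
    "PERSON": "PERSON",
    "ORG": "ORGANIZATION",
    "GPE": "LOCATION",   # countries, cities, states
    "LOC": "LOCATION",   # non-GPE locations (mountains, bodies of water, ...)
    "FAC": "LOCATION",   # buildings, airports, bridges, ...
}

nlp = spacy.load("en_core_web_sm")  # any English spaCy model works here

def extract_entities(text):
    """Return (entity text, normalized type) pairs, dropping unmapped labels."""
    doc = nlp(text)
    return [(ent.text, LABEL_MAP[ent.label_])
            for ent in doc.ents
            if ent.label_ in LABEL_MAP]

print(extract_entities("Met @jack at Twitter HQ in San Francisco today!"))
```

Each of the other systems needs a similar (but slightly different) map from its own label set to these three shared types.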
Here are the precision, recall, and F1 scores for these systems, sorted with the highest F1 score first:

| System Name | Precision | Recall | F1 Score |
|---|---|---|---|
| Stanford CoreNLP | 0.526600541 | 0.453416149 | 0.487275761 |
| Stanford CoreNLP (with Twitter POS tagger) | 0.526600541 | 0.453416149 | 0.487275761 |
| TwitterNER | 0.661496966 | 0.380822981 | 0.483370288 |
| OSU NLP | 0.524096386 | 0.405279503 | 0.45709282 |
| Stanford CoreNLP (with caseless models) | 0.547077922 | 0.392468944 | 0.457052441 |
| Stanford CoreNLP (with truecasing) | 0.413084823 | 0.421583851 | 0.417291066 |
| MITIE | 0.322916667 | 0.457298137 | 0.378534704 |
| spaCy | 0.278140062 | 0.380822981 | 0.321481239 |
| Polyglot | 0.273080661 | 0.327251553 | 0.297722055 |
| NLTK | 0.149006623 | 0.331909938 | 0.205677171 |

Precision measures the fraction of the entities that the system produced that were correct, whereas recall measures the fraction of the correct entities that the system managed to find. The F1 score is the harmonic mean of the two. Which of these numbers matters most to you depends on how you plan to use NER. For example, if the output of the NER system is always reviewed by a human, you might prefer a high-recall/low-precision system over a low-recall/high-precision one: the reviewers can always toss out any bad entities the system outputs, but if the system doesn't report an entity at all, they will never see it. On the other hand, if something important happens automatically to every entity the system outputs, you might prefer the low-recall/high-precision system, so that the entities it does output are as likely as possible to be correct. All other things being equal, if you just want one number to look at, use the F1 score.

Out of the box, Stanford CoreNLP is the winner as measured by F1 score, though TwitterNER has much higher precision. It is interesting that none of the alternative configurations of Stanford CoreNLP produced any improvement. The Twitter-optimized POS tagger didn't change the results at all for the entity types I examined (though it did change the results for some other entity types), which suggests that POS tagging plays a relatively minor role. Truecasing and the caseless models made things even worse. My guess is that the truecaser creates more capitalization errors than it fixes, and the drop from the caseless models probably means that capitalization information, as unreliable as it is in tweets, is still useful overall.

Given that they were designed explicitly for Twitter, it is somewhat surprising that TwitterNER and OSU Twitter NLP Tools did not get the highest F1 scores, but they were trained on a fairly small amount of data compared to the general-purpose systems, even if that data was better matched to this task.

One improvement that can easily be made to all of the systems is to exclude any detected entities that are @-mentions. @-mentions do refer to accounts, which correspond to either a person or an organization, so it would be natural to categorize them as entities. However, they are not marked as entities in the test data, since they are easy to identify with nearly 100% accuracy using a regular expression, and account profile information is likely to be a better source than the text of the tweet for distinguishing between people and organizations.
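Filtering these out can be as simple as the following sketch (the handle pattern here is an assumption, not the exact rule used for the evaluation):

```python
# Sketch: drop detected entities that are nothing but an @-mention.
# The pattern (letters, digits, and underscores, up to 15 characters) is an
# assumption about what counts as a Twitter handle.
import re

MENTION_RE = re.compile(r"@\w{1,15}")

def drop_mention_entities(entities):
    """Remove (text, type) pairs whose text is just an @-mention."""
    return [(text, etype) for text, etype in entities
            if not MENTION_RE.fullmatch(text)]

entities = [("@jack", "PERSON"),
            ("Twitter HQ", "ORGANIZATION"),
            ("San Francisco", "LOCATION")]
print(drop_mention_entities(entities))
# [('Twitter HQ', 'ORGANIZATION'), ('San Francisco', 'LOCATION')]
```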
Here are the results for all systems with @-mention entities excluded:

| System Name | Precision | Recall | F1 Score |
|---|---|---|---|
| Stanford CoreNLP | 0.526838069 | 0.453416149 | 0.487377425 |
| Stanford CoreNLP (with Twitter POS tagger) | 0.526838069 | 0.453416149 | 0.487377425 |
| TwitterNER | 0.661496966 | 0.380822981 | 0.483370288 |
| OSU NLP | 0.524096386 | 0.405279503 | 0.45709282 |
| Stanford CoreNLP (with caseless models) | 0.547077922 | 0.392468944 | 0.457052441 |
| Stanford CoreNLP (with truecasing) | 0.413084823 | 0.421583851 | 0.417291066 |
| MITIE | 0.340364057 | 0.457298137 | 0.390260063 |
| spaCy | 0.28426543 | 0.380822981 | 0.325535092 |
| Polyglot | 0.273080661 | 0.327251553 | 0.297722055 |
| NLTK | 0.149006623 | 0.331909938 | 0.205677171 |

The improvement is only significant for MITIE and spaCy, but, as expected, no scores went down, so the filter is still worth applying.

Since TwitterNER can easily be retrained, let's see if we can make it better. The W-NUT Twitter NER shared task includes a set of training data that all participants are required to use, and using any additional training data is considered cheating. From a research perspective this is a really good idea, because it ensures that the winner won by having the best algorithm, not just by using the most training data. But if you simply want the best system, you want to throw as much training data at it as you can. Fortunately, there are at least three more sets of tweets annotated for named entities available on the internet:

* A set of manually created annotations (Hege)
* A set of crowdsourced annotations (Finin)
* The Twitter section of the 2017 W-NUT test data (W-NUT 2017)

One of the challenges of using data from other sources is that there can be inconsistencies in formatting that you have to watch out for. I cleaned up the following issues in this data:

* The W-NUT 2017 data incorrectly splits hashtags and @-mentions into two tokens (e.g. "@" and "username" rather than "@username"). I re-joined them.
* All three of these sources annotate @-mentions as Person entities. I removed the Person annotations from all @-mentions.
* In this data, all URLs and numbers have been replaced with "URL" and "NUMBER", respectively. That replacement reduces data sparsity without sacrificing much information, since for NER it usually doesn't matter exactly which URL or number appears, and there are infinitely many of them. However, TwitterNER has specialized features for numbers and URLs that expect numbers to look like numbers and URLs to look like URLs, so I replaced every "NUMBER" token with "1" and every "URL" token with "http://url.com".

Here are the results when TwitterNER is trained on each of these datasets in addition to the shared task training data:

| System Name | Precision | Recall | F1 Score |
|---|---|---|---|
| TwitterNER (with Hege training data) | 0.657213317 | 0.413819876 | 0.507860886 |
| TwitterNER (with W-NUT 2017 training data) | 0.675307842 | 0.404503106 | 0.505948046 |
| TwitterNER (with Finin training data) | 0.598086124 | 0.388198758 | 0.470809793 |

After adding either the Hege or the W-NUT 2017 data, TwitterNER has the highest F1 score of all the systems, though adding the Finin data actually decreases the F1 score. This is likely because the quality of the Finin annotations is lower: they were crowdsourced rather than produced by a small number of well-trained annotators like the other datasets. If we combine just the W-NUT 2017 and Hege data, w