[R] Stanford AI Lab Blog – Bootleg: Chasing the Tail with Self-Supervised Named Entity Disambiguation



New blog post from SAIL: identifying rare people, places, and things (entities) in text is a critical problem in AI, especially since the majority of entities are rare. We address this problem with our self-supervised system, Bootleg, which learns to reason over entity types and relations to improve performance on rare entities.

Blog Post: http://ai.stanford.edu/blog/bootleg/

Authors: Laurel Orr, Megan Leszczynski, Simran Arora, Neel Guha, Xiao Ling, Sen Wu, and Chris Ré

Paper: https://arxiv.org/pdf/2010.10363.pdf

Named entity disambiguation (NED) is the process of mapping “strings” to “things” in a knowledge base. You have likely already used a system that requires NED multiple times today. Every time you ask your personal assistant a question or issue a query to your favorite search engine, these systems use NED to understand which people, places, and things (entities) are being talked about.
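To make the “strings to things” mapping concrete, here is a toy sketch of the simplest possible NED system: look up a mention string in a candidate list and pick the candidate with the highest popularity prior. The knowledge base, entity names, and prior values are invented for illustration and are not Bootleg's actual data or method.

```python
# Toy candidate lookup for NED: each mention string maps to a list of
# (entity, prior-probability) candidates. All values are illustrative.
KB = {
    "lincoln": [
        ("Abraham Lincoln", 0.7),    # popular entity: the president
        ("Lincoln, Nebraska", 0.2),  # tail entity: the city
        ("Lincoln Motor Company", 0.1),
    ],
}

def disambiguate(mention: str) -> str:
    """Pick the candidate entity with the highest prior probability."""
    candidates = KB.get(mention.lower(), [])
    if not candidates:
        return "NIL"  # unseen entity: no candidate exists in the KB
    return max(candidates, key=lambda c: c[1])[0]

print(disambiguate("Lincoln"))  # prints "Abraham Lincoln"
```

Note that a prior-only system like this always returns the popular entity, regardless of context, which is exactly the failure mode on tail entities discussed below.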

NED gets more interesting when we examine the full spectrum of entities shown above, specifically the more rare tail and unseen entities. These are entities that occur infrequently or not at all in data. Performance over the tail is critical because the majority of entities are rare. In Wikidata, only 13% of entities even have Wikipedia pages as a source of textual information.

Prior approaches to NED use BERT-based systems to memorize textual patterns associated with an entity (e.g., Abraham Lincoln is associated with “president”). As shown above, the SotA BERT-based baseline from Févry et al. does a great job of memorizing patterns for popular entities (it achieves 86 F1 points over all entities), but it does much worse on rare entities (58 F1 points lower on the tail). One possible route to better tail performance is simply to train on more data, but this would likely require training over data 1,500x the size of Wikipedia for the model to achieve 60 F1 points over all entities!
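The intuition behind reasoning over types, rather than memorizing per-entity patterns, can be sketched in a few lines: a coarse type cue from the surrounding context boosts candidates of the matching type, so a tail entity can beat a popular one. Everything here (the candidate table, the cue words, and the bonus weight) is made up for illustration, in the spirit of Bootleg's type reasoning rather than its actual model.

```python
# Hedged sketch: combine a popularity prior with a context-derived
# type cue. All candidates, cues, and weights are illustrative.
CANDIDATES = {
    "lincoln": [
        {"name": "Abraham Lincoln", "prior": 0.7, "types": {"person"}},
        {"name": "Lincoln, Nebraska", "prior": 0.2, "types": {"city"}},
    ],
}

# Context words that signal an entity type.
CONTEXT_TYPE_CUES = {"president": "person", "mayor": "city", "downtown": "city"}

def disambiguate(mention: str, context_words: list[str]) -> str:
    """Score each candidate as prior + bonus when its type matches the context."""
    cues = {CONTEXT_TYPE_CUES[w] for w in context_words if w in CONTEXT_TYPE_CUES}
    def score(cand):
        type_bonus = 0.6 if cand["types"] & cues else 0.0
        return cand["prior"] + type_bonus
    return max(CANDIDATES[mention.lower()], key=score)["name"]

print(disambiguate("Lincoln", ["the", "downtown", "of"]))  # prints "Lincoln, Nebraska"
```

Because the type signal (“city”) is shared across many entities, it generalizes to tail entities that the model has rarely or never seen, which is the key advantage over pure pattern memorization.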

In this blog post, we present Bootleg, a self-supervised approach to NED that is better able to handle rare entities.


submitted by /u/regalalgorithm

