We introduce a new test of how well language models capture meaning in
children’s books. Unlike standard language modelling benchmarks, it
distinguishes the task of predicting syntactic function words from that of
predicting lower-frequency words, which carry greater semantic content. We
compare a range of state-of-the-art models, each with a different way of
encoding what has been previously read. We show that models which store
explicit representations of long-term contexts outperform state-of-the-art
neural language models at predicting semantic content words, although this
advantage is not observed for syntactic function words. Interestingly, we find
that the amount of text encoded in a single memory representation strongly
affects performance: there is a sweet spot, not too big and not too
small, between single words and full sentences that allows the most meaningful
information in a text to be effectively retained and recalled. Further, the
attention over such window-based memories can be trained effectively through
self-supervision. We then assess the generality of this principle by applying
it to the CNN QA benchmark, which involves identifying named entities in
paraphrased summaries of news articles, and achieve state-of-the-art
performance.
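
To make the window-memory idea concrete, here is a minimal sketch in Python with NumPy. The helper names (`embed`, `encode`, `window_memories`, `answer`), the window size `b`, and the toy random embeddings are illustrative assumptions, not the paper's implementation: the sketch simply stores one memory slot per candidate-word occurrence, each holding a small window of surrounding text, and answers a cloze query by soft attention over those slots.

```python
import numpy as np

EMB_DIM = 64
rng = np.random.default_rng(0)
_vocab = {}

def embed(word):
    """Toy lookup: a fixed random vector per word (a real model would learn these)."""
    if word not in _vocab:
        _vocab[word] = rng.standard_normal(EMB_DIM)
    return _vocab[word]

def encode(tokens):
    """Bag-of-words encoding: the sum of the word embeddings in a span."""
    return np.sum([embed(t) for t in tokens], axis=0)

def window_memories(context_tokens, candidates, b=2):
    """One memory slot per occurrence of a candidate word: the candidate plus
    b tokens of context on either side (the unit of text whose size matters)."""
    slots = []
    for i, tok in enumerate(context_tokens):
        if tok in candidates:
            lo, hi = max(0, i - b), min(len(context_tokens), i + b + 1)
            slots.append((tok, context_tokens[lo:hi]))
    return slots

def answer(query_tokens, context_tokens, candidates, b=2):
    """Attend over the window memories with the query and return the candidate
    at the centre of the highest-scoring window."""
    q = encode(query_tokens)
    slots = window_memories(context_tokens, candidates, b)
    scores = np.array([q @ encode(window) for _, window in slots])
    attn = np.exp(scores - scores.max())
    attn /= attn.sum()                      # softmax attention over memories
    return slots[int(np.argmax(attn))][0]

context = "the cat sat on the mat while the dog slept by the door".split()
query = "the XXXXX slept by the door".split()   # cloze query; XXXXX is the blank
# With random embeddings the prediction is arbitrary; the point is the data flow.
print(answer(query, context, candidates={"cat", "mat", "dog", "door"}))
```

With learned embeddings, the choice of `b` is where the sweet spot described above appears: windows larger than a single word but smaller than a full sentence retain the most useful information for predicting content words.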

Source: http://bppro.link/?c=QWy
