Compressing Word Embeddings
- Authors
  - Martin Andrews (@mdda123)
This paper was accepted to the 23rd International Conference on Neural Information Processing (ICONIP 2016) in Kyoto, Japan.
Abstract
Recent methods for learning vector space representations of words have succeeded in capturing fine-grained semantic and syntactic regularities using vector arithmetic. However, these vector space representations (created through large-scale text analysis) are typically stored verbatim, since their internal structure is opaque. Using word-analogy tests to monitor the level of detail stored in compressed re-representations of the same vector space, the trade-offs between the reduction in memory usage and expressiveness are investigated. A simple scheme is outlined that can reduce the memory footprint of a state-of-the-art embedding by a factor of 10, with only minimal impact on performance. Then, using the same 'bit budget', a binary (approximate) factorisation of the same space is also explored, with the aim of creating an equivalent representation with better interpretability.
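The abstract describes the compression only at a high level. As a rough illustration (not the paper's actual scheme), the sketch below uniformly quantises each value of a float32 embedding matrix to a few bits: storing, say, 3-bit codes rather than 32-bit floats approaches the ten-fold memory reduction mentioned above. The function names and parameters here are illustrative assumptions, and the codes are kept in `uint8` for simplicity; realising the full saving would require bit-packing.

```python
# Illustrative sketch only -- not the exact scheme from the paper.
import numpy as np

def quantise(embedding, bits=3):
    """Uniformly quantise each value to 2**bits levels over the matrix's range."""
    levels = 2 ** bits
    lo, hi = float(embedding.min()), float(embedding.max())
    step = (hi - lo) / (levels - 1)
    codes = np.round((embedding - lo) / step).astype(np.uint8)  # small integer codes
    return codes, lo, step

def dequantise(codes, lo, step):
    """Reconstruct approximate float embeddings from the stored codes."""
    return lo + codes.astype(np.float32) * step

# Toy example: 10k words x 300 dimensions
rng = np.random.default_rng(0)
E = rng.normal(size=(10_000, 300)).astype(np.float32)

codes, lo, step = quantise(E, bits=3)
E_hat = dequantise(codes, lo, step)
print("reconstruction RMSE:", np.sqrt(np.mean((E - E_hat) ** 2)))
```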
Poster Version
Link to Paper
- Direct link to the paper PDF on the author's website (as allowed by Springer copyright rules)
And the Springer BibTeX entry:
@Inbook{Andrews2016-CompressingWordEmbeddings,
  author="Andrews, Martin",
  editor="Hirose, Akira and Ozawa, Seiichi and Doya, Kenji and Ikeda, Kazushi and Lee, Minho and Liu, Derong",
  title="Compressing Word Embeddings",
  bookTitle="Neural Information Processing: 23rd International Conference, ICONIP 2016, Kyoto, Japan, October 16--21, 2016, Proceedings, Part IV",
  year="2016",
  publisher="Springer International Publishing",
  address="Cham",
  pages="413--422",
  isbn="978-3-319-46681-1",
  doi="10.1007/978-3-319-46681-1_50",
  url="http://dx.doi.org/10.1007/978-3-319-46681-1_50"
}