Looks like a cool and novel embedding generation technique;
“Coffee” is *related* to “cup” (coffee is a beverage often drunk from a cup), but “coffee” is not *similar* to “cup” (coffee is a beverage, a cup is a container);
Key:
Model learns to compute a word's embedding by processing its dictionary definition;
Dictionaries exist for almost every language;
Goal is to obtain better representations for more languages with less effort;
Desirable for future natural language understanding systems;
The model consists of a definition autoencoder: an LSTM processes the definition of a word to yield its corresponding word embedding;
Model should be able to recover the definition from the embedding;
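The definition-autoencoder idea above can be sketched roughly as follows: an encoder LSTM reads the definition tokens and its final hidden state serves as the word embedding; a decoder LSTM, conditioned on that embedding, tries to reconstruct the definition. This is a minimal PyTorch sketch under my own assumptions (class and method names, dimensions, and the teacher-forced decoder are illustrative, not the paper's exact architecture):

```python
import torch
import torch.nn as nn

class DefinitionAutoencoder(nn.Module):
    # Hypothetical minimal version of the definition autoencoder:
    # encode(definition) -> word embedding; forward() -> reconstruction logits.
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.tok_embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.decoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def encode(self, definition_ids):
        # Run the encoder LSTM over the definition tokens;
        # the final hidden state is taken as the word embedding.
        _, (h, _) = self.encoder(self.tok_embed(definition_ids))
        return h[-1]                      # shape: (batch, hidden_dim)

    def forward(self, definition_ids):
        word_emb = self.encode(definition_ids)
        # Condition the decoder on the word embedding (as its initial
        # hidden state) and reconstruct the definition with teacher forcing.
        h0 = word_emb.unsqueeze(0)
        c0 = torch.zeros_like(h0)
        dec_out, _ = self.decoder(self.tok_embed(definition_ids), (h0, c0))
        return self.out(dec_out)          # per-token logits over the vocab

model = DefinitionAutoencoder(vocab_size=100)
defn = torch.randint(0, 100, (2, 7))      # batch of 2 definitions, 7 tokens each
emb = model.encode(defn)                  # (2, 128) word embeddings
logits = model(defn)                      # (2, 7, 100) reconstruction logits
```

Training would minimize cross-entropy between `logits` and the definition tokens, so the embedding is forced to retain enough information to recover the definition (the recoverability requirement in the last note).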