2018 DS/ML digest 29

Posted by snakers41 on November 15, 2018


Datasets

  • Open Images v4 - the biggest CV dataset ever. I played with v2 or v3 - it was quite random and low quality at times;

Articles

  • GANs + hentai genitals;
  • Understanding optimization in deep learning by analyzing trajectories of gradient descent;
  • Cats + RPI + mobile networks;
  • Cleaning texts for OCR with UNET;
  • Online speaker diarization by Google;
  • A review of attention mechanisms;
  • Deep Latent-Variable Models for Natural Language;
  • Looks like PyTorch is updating their distributed docs, but little info on NCCL;

NLP

  • A move to start caring about explaining LSTMs, no real ideas though;
  • Statistical MT with PyTorch;
  • There is even a book about deep encoder-decoders for NLP;

Auto-Encoding Dictionary Definitions into Consistent Word Embeddings

  • FAIR, link;
  • Looks like a cool and novel embedding generation technique;
  • “Coffee” is related to “cup” as coffee is a beverage often drunk in a cup, but “coffee” is not similar to “cup” in that coffee is a beverage and cup is a container;
  • Key:
    • Model learns to compute word embeddings by processing dictionary definition;
    • Dictionaries are very common in almost any language;
    • The goal is to obtain better representations for more languages with less effort;
  • Desirable for future natural language understanding systems;
  • The model is a definition autoencoder: an LSTM processes a word's dictionary definition to yield the corresponding word embedding;
  • The model should also be able to recover the definition from the embedding (a minimal sketch of this setup is below);
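To make the idea concrete, here is a minimal PyTorch sketch of such a definition autoencoder (my own toy version, not the FAIR code): an LSTM encoder reads a dictionary definition and its final hidden state is used as the word embedding, while an LSTM decoder conditioned on that embedding is trained to reconstruct the definition. The vocabulary size, dimensions, class names and the toy "definition" are made-up placeholders.

```python
# Toy definition autoencoder sketch (placeholders, not the FAIR implementation):
# an LSTM encoder turns a dictionary definition into a word embedding,
# an LSTM decoder conditioned on that embedding reconstructs the definition.
import torch
import torch.nn as nn

VOCAB, EMB, HID = 1000, 128, 128  # made-up sizes

class DefinitionAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.tok_emb = nn.Embedding(VOCAB, EMB)
        self.encoder = nn.LSTM(EMB, HID, batch_first=True)
        self.decoder = nn.LSTM(EMB, HID, batch_first=True)
        self.out = nn.Linear(HID, VOCAB)

    def encode(self, definition):                 # definition: (batch, seq) token ids
        _, (h, _) = self.encoder(self.tok_emb(definition))
        return h[-1]                              # (batch, HID) word embedding

    def forward(self, definition):
        word_vec = self.encode(definition)
        # condition the decoder on the word embedding via its initial hidden state
        h0 = word_vec.unsqueeze(0)
        c0 = torch.zeros_like(h0)
        # teacher forcing: feed the definition shifted by one token
        dec_in = self.tok_emb(definition[:, :-1])
        dec_out, _ = self.decoder(dec_in, (h0, c0))
        return self.out(dec_out), word_vec        # logits over next definition tokens

model = DefinitionAutoencoder()
definition = torch.randint(0, VOCAB, (1, 7))      # toy "definition" of one word
logits, embedding = model(definition)
targets = definition[:, 1:]
loss = nn.CrossEntropyLoss()(logits.reshape(-1, VOCAB), targets.reshape(-1))
print(embedding.shape, loss.item())
```

Training the reconstruction loss over many (word, definition) pairs pushes the encoder's final state to act as a word embedding; since dictionaries exist for most languages, the same recipe transfers cheaply.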

RL

Hardware

  • NVLink ML benchmarks: 2x 2080 Ti is only ~20-25% faster than 2x 1080 Ti;

Competitions

  • New Kaggle Quora toxic comment competition - kernels only - meh;

TDS picks