2019 DS/ML digest 14

Posted by snakers41 on August 21, 2019


  • Acoustic event detection - quantization, distillation, and separable convolutions to make models smaller and faster;
  • Surprise, surprise - fine-tuning a large model on speakers with speech impairments improves quality;
  • Google’s state-of-the-art speaker diarization;
  • How CMU Sphinx works;
  • A very cool and, by the looks of it, working paper - real-time voice cloning;
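The first bullet above mentions separable convolutions as a way to make acoustic models smaller. A minimal sketch of why this works, using hypothetical layer sizes (not taken from any of the linked papers): a depthwise separable convolution replaces one joint spatial-and-channel kernel with a per-channel spatial filter plus a 1x1 channel mixer, cutting the parameter count by roughly an order of magnitude.

```python
# Parameter-count comparison: standard vs. depthwise separable convolution.
# Layer sizes (64 -> 128 channels, 3x3 kernel) are illustrative assumptions.

def standard_conv_params(c_in, c_out, k):
    # A standard 2D convolution mixes channels and space in one k x k kernel.
    return c_in * c_out * k * k

def separable_conv_params(c_in, c_out, k):
    # Depthwise: one k x k filter per input channel;
    # pointwise: a 1x1 convolution that mixes channels.
    return c_in * k * k + c_in * c_out

c_in, c_out, k = 64, 128, 3
std = standard_conv_params(c_in, c_out, k)   # 73,728 weights
sep = separable_conv_params(c_in, c_out, k)  # 8,768 weights
print(std, sep, round(std / sep, 1))         # ~8.4x fewer parameters
```

The same trade-off carries over to compute, which is why these layers show up together with quantization and distillation in model-shrinking work.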


  • NLP trends from ACL 2019;
  • Yet another better BERT;
  • The Illustrated GPT-2 (Visualizing Transformer Language Models);
  • Facebook finally embraces fastText for misspellings;
  • Key idea - use pre-trained transformers for distillation;
  • Making NMT robust by adding adversarial examples to the training data;
  • Transformer with 8B parameters;
  • State of transfer learning in NLP - kind of meh article;
  • OpenAI’s huge GPT follow-up;
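The distillation bullet above is worth unpacking. The usual recipe trains a small student to match the temperature-softened output distribution of a large pre-trained teacher. A minimal sketch in plain Python (toy 3-class logits are made up for illustration; real setups add a hard-label term and backprop through a student network):

```python
import math

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; higher T softens the distribution,
    # exposing the teacher's relative preferences among wrong classes.
    exps = [math.exp(l / T) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distillation_loss(teacher_logits, student_logits, T=2.0):
    # Cross-entropy between the teacher's softened distribution and the
    # student's - the "soft target" signal used to compress large models.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))

teacher = [4.0, 1.0, 0.2]   # hypothetical teacher logits
student = [3.5, 1.2, 0.1]   # hypothetical student logits
print(distillation_loss(teacher, student))
```

The loss is minimized when the student's softened distribution matches the teacher's exactly, which is what makes it a useful compression objective.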

ML / market

Cool libraries / papers / etc

  • Inplace BatchNorm - memory usage reduction for CV;
  • QRNN explanation;
  • Yet another LSTM replacement - SRU. Like QRNN, it requires additional dependencies;
  • Cycle consistency for repeating actions in video;
  • Dealing with artifacts in medical CV - just train a model to filter them out;
  • Useful DS sampling algorithms;
  • Wavefunction collapse algorithm;

Python / coding

Cool random stuff

  • Automated fly brain slicing with CNNs;
  • Idea - hardware-encoded limitations for embedded ML devices;
  • Lottery ticket hypothesis revisited;
  • Advances in conversational AI by FAIR:
    • Key problem - dialogue consistency;
    • “Created a new NLI data set called Dialogue NLI, which is used to both improve and evaluate the consistency of dialogue models”;