2019 DS/ML digest 10

Posted by snakers41 on May 14, 2019


PyTorch 1.1 release

  • Tensorboard loggers (beta);
  • DistributedDataParallel new functionality and tutorials;
  • Multi-headed attention block;
  • EmbeddingBag enhancements: from_pretrained and per-sample weights;
  • Other cool, but more niche features:
    • nn.SyncBatchNorm
    • optim.lr_scheduler.CyclicLR
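
Of the new features, optim.lr_scheduler.CyclicLR is the easiest to try in isolation. A minimal sketch, assuming PyTorch >= 1.1 (the tiny model and the learning-rate bounds are placeholders):

```python
import torch
from torch import nn, optim

# Placeholder model and optimizer; by default CyclicLR wants a
# momentum-based optimizer, since it cycles momentum alongside the LR.
model = nn.Linear(4, 2)
opt = optim.SGD(model.parameters(), lr=0.001, momentum=0.9)

# LR rises linearly from base_lr to max_lr over 5 steps, then falls back.
sched = optim.lr_scheduler.CyclicLR(opt, base_lr=0.001, max_lr=0.01,
                                    step_size_up=5)

lrs = []
for _ in range(10):
    opt.step()       # in real training: loss.backward() before this
    sched.step()
    lrs.append(opt.param_groups[0]["lr"])
# lrs peaks at max_lr mid-cycle and returns to base_lr at the cycle end
```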


Applying image augmentations to spectrograms;
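
Even without an augmentation library, the idea can be sketched by treating a (freq, time) spectrogram as a 2D image and masking random bands, SpecAugment-style. The helper below is a hypothetical, numpy-only illustration:

```python
import numpy as np

def augment_spectrogram(spec, rng, max_freq_mask=8, max_time_mask=16):
    """Mask a random frequency band and a random time band of a
    (freq, time) spectrogram, filling with the global mean."""
    spec = spec.copy()
    fill = spec.mean()
    f = rng.integers(1, max_freq_mask + 1)         # band widths >= 1
    f0 = rng.integers(0, spec.shape[0] - f + 1)
    spec[f0:f0 + f, :] = fill
    t = rng.integers(1, max_time_mask + 1)
    t0 = rng.integers(0, spec.shape[1] - t + 1)
    spec[:, t0:t0 + t] = fill
    return spec

rng = np.random.default_rng(0)
spec = rng.standard_normal((64, 128))              # fake log-mel spectrogram
aug = augment_spectrogram(spec, rng)
```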


A set of wrappers to load all of the latest trendy huge models;

Word2Vec visualized - an article from the amazing “Illustrated X” series;
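
For context, the skip-gram flavour of Word2Vec covered there trains on (center word, context word) pairs drawn from a sliding window; a minimal sketch of the pair generation (the helper name is ours, not from the article):

```python
def skipgram_pairs(tokens, window=2):
    """Generate (center, context) training pairs as in skip-gram Word2Vec."""
    pairs = []
    for i, center in enumerate(tokens):
        # every other token within `window` positions is a context word
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                pairs.append((center, tokens[j]))
    return pairs

pairs = skipgram_pairs(["the", "cat", "sat", "on", "mat"], window=1)
# each interior word yields two pairs, the edge words one each
```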

An idea from fast.ai:

  • Crappify an image. Train a UNET to fix it;
  • It converges quickly;
  • Replace with some generative loss / GAN;
  • Other ideas - self-attention, pre-trained UNET;
  • NoGAN training:
    • Pretrain the generator;
    • Save images generated by the pretrained generator;
    • Pretrain the critic as a binary classifier on real vs. generated images;
    • Train as a regular GAN;
  • This also stabilizes videos;

  • Nice notes on ML bias;
  • Sparse Transformers from OpenAI;
  • New epoch in CPU design?
  • Transformers can also learn music;
  • Google intends … to kill the SMB calls market?
  • Controversial topics about medical AI:

    “No medical advance is going to be achieved by a team who has designed a fancy new model for the task.”
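
The “crappify” step from the fast.ai recipe above is just any cheap degradation that the UNET then learns to undo. A minimal numpy sketch for grayscale images in [0, 1] (the helper and its parameters are illustrative, not fast.ai's actual code):

```python
import numpy as np

def crappify(img, rng, factor=4, noise_std=0.05):
    """Degrade an image: block-average downscale, nearest upscale, add noise.
    (crappified, original) pairs then supervise a restoration UNET."""
    h, w = img.shape
    h, w = h - h % factor, w - w % factor          # crop to a multiple of factor
    small = (img[:h, :w]
             .reshape(h // factor, factor, w // factor, factor)
             .mean(axis=(1, 3)))                   # block-average downscale
    low = np.repeat(np.repeat(small, factor, axis=0), factor, axis=1)
    low = low + rng.normal(0.0, noise_std, low.shape)
    return np.clip(low, 0.0, 1.0)

rng = np.random.default_rng(0)
img = rng.random((64, 64))                         # stand-in for a real photo
crap = crappify(img, rng)
```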


  • Google Landmarks 2019 - 5M images, 700k landmarks;
  • Open Images V5 - now it boasts semantic segmentation data as well;
  • Russian Open Speech To Text (STT/ASR) Dataset;


Cool paper about automated augmentations. Key idea: sample unlabeled images, corrupt them, and use a KL-divergence loss to enforce that the model predicts the same label distribution for the original and corrupted versions.
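
A minimal numpy sketch of that consistency loss - KL divergence between the model's predicted distributions on an unlabeled image and its corrupted copy (the logits here are made up; in the real setup both come from the same network):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)          # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def consistency_kl(logits_clean, logits_aug):
    """KL(p_clean || p_aug), averaged over the batch: penalizes the model
    for predicting differently on clean vs. corrupted inputs."""
    p = softmax(logits_clean)
    q = softmax(logits_aug)
    return float((p * (np.log(p) - np.log(q))).sum(axis=-1).mean())

logits_clean = np.array([[2.0, 0.0, -1.0]])        # toy model outputs
logits_aug = np.array([[1.0, 0.5, -0.5]])
loss = consistency_kl(logits_clean, logits_aug)    # > 0: predictions drifted
```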