2019 DS/ML digest 09

Posted by snakers41 on April 22, 2019

SO annual survey

The annual Stack Overflow survey, as usual:

  • Python is the fastest-growing and the second most loved language;
  • DevOps specialists are the highest paid;
  • ~10% of developers report mental health issues;
  • PyTorch is the most loved DL framework;
  • ~50% of developers … use Windows as their daily driver;
  • Data scientists and researchers … are the most active in job search;
  • Annual median DS salary is US$62k (non-US);

NLP

Multi-language word alignments;
EmbeddingBag now has from_pretrained and a per-sample weights option https://github.com/pytorch/pytorch/issues/4068 (a short sketch below);
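
A minimal sketch of what this enables; the weight matrix here is a random stand-in for a real pre-trained one, and note that per_sample_weights is only supported with mode='sum':

```python
import torch
import torch.nn as nn

# Stand-in for a real pre-trained embedding matrix: 10 tokens, 3 dims.
weight = torch.randn(10, 3)
bag = nn.EmbeddingBag.from_pretrained(weight, mode='sum', freeze=True)

# Two "bags" packed into one flat index tensor; offsets mark bag starts.
indices = torch.tensor([1, 2, 4, 5, 4, 3, 2, 9])
offsets = torch.tensor([0, 4])

# per_sample_weights gives each index its own weight in the reduction.
out = bag(indices, offsets, per_sample_weights=torch.rand(8))
print(out.shape)  # torch.Size([2, 3])
```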

Speech-to-text (STT)

wav2vec: Unsupervised Pre-training for Speech Recognition:

  • Self-supervised task: predict future samples from a given signal context;
  • Achieves state-of-the-art performance on TIMIT;
  • Encoder network + context network;
  • Negative mining with a contrastive loss (see the sketch after this list);
  • Once trained, the output of the context network can be used instead of log-filterbanks;
  • Evaluation is done … on very small datasets;
  • Looks a bit dodgy to me;
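
A minimal sketch of the contrastive objective as I read it: for each step offset k, an affine map h_k predicts the future latent z_{t+k} from the context output c_t, with negatives sampled from other time steps of the same signal. The names, shapes, and uniform negative sampling are my simplifications, not the paper's exact recipe:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def wav2vec_step_loss(z, c, h_k, k, num_negatives=10):
    # z: (T, D) encoder latents, c: (T, D) context network outputs,
    # h_k: step-specific affine map, e.g. nn.Linear(D, D).
    T = z.size(0)
    preds = h_k(c[: T - k])           # predictions for z_{t+k}
    targets = z[k:]                   # true future latents

    # Positive term: the prediction should score high against the truth.
    pos = F.logsigmoid((preds * targets).sum(-1))

    # Negatives: latents drawn uniformly from other time steps.
    neg_idx = torch.randint(0, T, (num_negatives, T - k))
    negs = z[neg_idx]                 # (N, T-k, D)
    neg = F.logsigmoid(-(preds.unsqueeze(0) * negs).sum(-1))

    return -(pos + neg.sum(0)).mean()

# Usage with made-up shapes:
z, c = torch.randn(100, 512), torch.randn(100, 512)
loss = wav2vec_step_loss(z, c, nn.Linear(512, 512), k=3)
```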

PyTorch DataParallel scalability

Link one and two.
In practice, with proper optimization, plain DataParallel achieves near-linear scaling on 2-4 GPUs.
Scaling beyond 4 GPUs requires DistributedDataParallel (DDP); a short usage sketch below.
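
A minimal before/after sketch, assuming a 2-GPU machine; DataParallel is one line, while DDP needs process-group setup and a launcher:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))

# Single process, multiple GPUs: scatters each batch across devices and
# gathers outputs back on the default GPU every forward pass.
dp_model = nn.DataParallel(model.cuda(), device_ids=[0, 1])

# Beyond ~4 GPUs: one process per GPU with DistributedDataParallel,
# launched via torch.distributed; avoids the GIL contention and the
# gather bottleneck that cap DataParallel scaling.
# torch.distributed.init_process_group(backend='nccl')
# ddp_model = nn.parallel.DistributedDataParallel(
#     model.cuda(), device_ids=[local_rank])
```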

Posts

Cool papers

  • Essentially RetinaNet + semantic segmentation in one network for dense objects; also covers some more old-school approaches;
  • Detail-Preserving Pooling in Deep Networks;
  • MorphNet:
    • Optimizes existing architectures;
    • Alternates shrinking and expanding the network (a rough sketch after this list);
    • Targeted optimization, i.e. you optimize some property of the network (e.g. FLOPs, size);
    • Reported cutting FLOPs by 10-15% on Inception V2;
  • A network to estimate optical flow;
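
A rough sketch of the MorphNet shrinking idea, assuming an L1 penalty on BatchNorm scales as the width regularizer; the paper's actual regularizer additionally weights each channel by its FLOP cost, and the coefficient here is made up:

```python
import torch.nn as nn

def bn_l1_penalty(model, coeff=1e-4):
    # Shrinking phase: L1 on BatchNorm scale factors pushes unimportant
    # channels toward zero so they can be pruned.
    penalty = sum(m.weight.abs().sum()
                  for m in model.modules()
                  if isinstance(m, nn.BatchNorm2d))
    return coeff * penalty

# Training: loss = task_loss + bn_l1_penalty(model). After convergence,
# prune near-zero channels, then uniformly widen the survivors to
# re-spend the saved FLOPs (the "expand" step).
```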