Parsing Common Crawl in 4 plain scripts in python (not 2)
TLDR
After starting the CC mini-project in our last post, we ran into several challenges, all of which we more or less resolved (or avoided altogether).
In the end, the full pipeline looks like this (see detailed explanations below):
python3 parse_cc_index.py
python3 save_cc_indexes.py
python3 prepare_wet_indexes.py
python3 process_wet_files.py
New challenges:
- The amount of data turned out to be ~10-20x larger than expected;
- The structure of the indexes - the CC index looks alphabetically ordered, but the domains in the WET / WARC files are not (see some charts below), i.e. the data we need is more or less “uniformly” spread across the WET files (there are no huge WET blobs consisting of 100% Russian sites);
- Concurrency / download speed;
- CPU-bound pre-processing - it turned out to be ~4x more resource-intensive than expected;
How we solved them:
- Well, since such a corpus is meant to be used in combination with other corpora, there is no need to download terabytes of text - I guess ~100-200 GB of text will suffice;
- This is just how CC works, but this can be mitigated by processing the most “important” WET files first;
- Just order the fattest download speed you can from your ISP and download files in as many threads as you can. Funnily enough, in Moscow this is not a bottleneck at all. Just look at the tariffs available - 500 Mbit/s connections cost essentially USD 10-15 per month (and you would pay USD 5-10 for your Internet anyway, so the extra cost is marginal);
- This is a tougher one - you need a lot of CPU cores. But in my case the pipeline produced around ~30-50 GB of text per day on 50% of my home machine (~3 physical cores of my Intel® Core™ i7-6800K CPU @ 3.40GHz), which is sufficient, I guess. I underestimated the post-processing, but I underestimated the amount of data even more (texts compress well); see the quick sanity check right after this list;
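As a quick back-of-the-envelope check on why bandwidth is not the bottleneck here (using the connection speed and daily output figures above; everything else is a rough assumption):

```python
# Rough sanity check: can a 500 Mbit/s connection ever be the bottleneck?

link_mbit_s = 500                        # ISP tariff mentioned above
link_mb_s = link_mbit_s / 8              # ~62.5 MB/s
seconds_per_day = 24 * 3600

# Theoretical daily download volume (ignoring protocol overhead)
max_daily_gb = link_mb_s * seconds_per_day / 1024
print(f'Max download per day: ~{max_daily_gb:,.0f} GB')   # roughly 5,300 GB

# CPU-bound output of the pipeline on ~3 physical cores (from above)
cpu_daily_gb = 40                        # ~30-50 GB of clean text per day
print(f'CPU-bound output per day: ~{cpu_daily_gb} GB')

# Even if the raw WET input is an order of magnitude larger than the
# cleaned text, the link would still sit idle most of the time.
```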
As usual, the new scripts are available here.
What the end result looks like
Yes, you could drop the url / domain / tld columns.
But disk space is essentially free nowadays )
url | domain | tld | sentence |
---|---|---|---|
https://priem.s-vfu.ru/priemnaya-kampaniya-201… | s-vfu | ru | 43.03.01 Сервис (Сервис в индустрии моды и кра… |
https://www.japancar.ru/gd/T1732772.html | japancar | ru | Прямую ссылку на страницы сайта, которые содер… |
http://blackannis.org/rock-5232.html | blackannis | org | Он был найден в качестве алебастра, декоративн… |
https://bykvu.com/bukvy/91424-minzdrav-predlag… | bykvu | com | Самые важные и свежие новости - вы узнаете пер… |
http://metkarkasnn.ru/tables/podstoliya/karkas… | metkarkasnn | ru | Оборудование для тату салона |
http://limpopo.com.ru/komplekt-v-krovatku-5-pr… | com | ru | Развивающие и интерактивные |
https://costagarant.com/news-1472642410/ | costagarant | com | Написать нам сообщение |
https://roof-facade.com/catalog/product/d_foli… | roof-facade | com | Панельные ограждения |
https://habr.com/post/354774/ | habr | com | Мне вот интересно, а почему на некоторых играх… |
https://kino.otzyv.ru/film/Пле%… | otzyv | ru | Дней до премьеры: |
http://sharepix.ru/dzhenerik-ledihep-ot-gepane… | sharepix | ru | Здоровье |
http://iknigi.net/avtor-anna-oduvalova/160271-… | iknigi | net | Пожалуйста, подождите… |
http://www.kaskad-electro.ru/magazin/product/s… | kaskad-electro | ru | Главная \ Интернет-магазин \ Светодиодные комм… |
https://libertycity.ru/files/gta-4/5152-nypd-p… | libertycity | ru | Новый, совершенно новый кар-пак.Особенности: -… |
http://www.debri-dv.com/m/post_comment/6027/all | debri-dv | com | В 2006 г. проект «Дебри-ДВ» был создан как эле… |
http://omsk.vselennaya-shluh.com/prostitutka-e… | vselennaya-shluh | com | Магнитогорск |
http://topwiz.ru/companies/produkty-0?region=336 | topwiz | ru | Покровское |
http://flexer.ru/stat/sect.php?r=71 | flexer | ru | www.cdstudio.ru 574 Аудио- и видеопродукция, н… |
http://www.avtovzglyad.ru/author/olga-grekova/ | avtovzglyad | ru | Столица возвращается к радиальной системе план… |
https://habr.com/post/354774/ | habr | com | Использованные инструменты |
https://mir-auto.net/category/byd-f3/offset120… | mir-auto | net | Петрово |
http://chel-oblsud.ru/index.php?html=inspectio… | chel-oblsud | ru | Непроцессуальные обращения в суд |
https://forum.aing.ru/viewtopic.php?f=5&t=1394… | aing | ru | На пару секунд задумался: «Не я же инициатор в… |
http://www.skiff-impex.ru/index.php?categoryID… | skiff-impex | ru | На главную |
http://moselservis.ru/katalog/internet-magazin… | moselservis | ru | Мы в социальных сетях |
https://ovaciya-krasnodar.ru/the-news/315-goro… | ovaciya-krasnodar | ru | • 27 - "Зиме поем мы песни. |
http://tipslife.ru/56227-в-ла%D… | tipslife | ru | К счастью, обошлось без серьезных последствий. |
http://barnaul.msboy.ru/catalog/motornye_masla… | msboy | ru | Витебск |
http://upsape.ru/rerayting-onlayn/ | upsape | ru | Copyright 2018 , Как стать копирайтером? |
https://priem.s-vfu.ru/priemnaya-kampaniya-201… | s-vfu | ru | 21.05.04 Горное дело (Открытые горные работы; … |
You can see that there are a lot of short texts from website navigation, but they are easily filtered by length.
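A minimal sketch of such a length filter, assuming the final table is loaded into a pandas DataFrame with the columns shown above (the file name and thresholds here are illustrative, not taken from the actual scripts):

```python
import pandas as pd

# hypothetical name for one of the output feather files
df = pd.read_feather('wet_sentences_0.feather')

# Drop very short "sentences" - mostly menu items, button labels and breadcrumbs
MIN_CHARS = 20
MIN_WORDS = 5

mask = (df['sentence'].str.len() >= MIN_CHARS) & \
       (df['sentence'].str.split().str.len() >= MIN_WORDS)
df_clean = df[mask].reset_index(drop=True)

print(f'Kept {len(df_clean):,} of {len(df):,} sentences')
```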
Sentence length distribution on a small sample
I have no intention whatsoever of checking this scientifically, but my rough guess is that 1-5% of websites are dedicated to pornography / prostitution (just a guess from looking at some random data samples).
Some more insights into the CC structure
This table shows, for each top-level domain zone, how many unique URLs with Russian text ended up in our final table and how many unique WET files they are spread across. You can see that all popular domain zones are distributed evenly across all of the 71,520 WET files.
tld | url count | distinct WET files |
---|---|---|
com | 12,899,615 | 71,520 |
ru | 7,888,988 | 71,520 |
net | 3,832,065 | 71,520 |
info | 3,295,949 | 71,520 |
by | 3,061,347 | 71,520 |
org | 2,557,377 | 71,520 |
kz | 1,415,778 | 71,520 |
biz | 592,528 | 71,494 |
pro | 469,202 | 71,374 |
me | 451,564 | 71,162 |
club | 245,570 | 68,796 |
pl | 186,901 | 65,671 |
online | 157,605 | 62,649 |
lv | 147,173 | 61,112 |
cc | 140,585 | 59,818 |
name | 137,457 | 59,939 |
az | 134,906 | 56,795 |
md | 128,719 | 58,751 |
eu | 127,226 | 56,783 |
kg | 104,835 | 53,391 |
This is the distribution of how many times a given WET file appears in the above URL list.
We can see that there are several distinct peaks, but obviously you should start your CC crawl processing with the more frequent files. My guess is that they will contain much more relevant data, i.e. there is some underlying logic to the way the WET / WARC files are ordered domain-zone-wise, but it is not apparent to me now.
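In practice, starting with the more frequent files just means sorting the WET file list by how many filtered URLs point into each file before downloading anything. A minimal sketch, assuming the filtered index rows live in a DataFrame with a column holding the WET file name (the file and column names are illustrative, not the actual ones from the scripts):

```python
import pandas as pd

# index_df: one row per URL kept after language filtering,
# together with the WET file that contains it
index_df = pd.read_feather('cc_index_ru.feather')

# Count how many relevant URLs each WET file contains
wet_counts = (index_df
              .groupby('wet_file')
              .size()
              .sort_values(ascending=False))

# Process the "fattest" WET files first - they yield the most text per download
priority_list = wet_counts.index.tolist()
print(wet_counts.head(10))
```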
Overall pipeline walk-through
python3 parse_cc_index.py # (1)
python3 save_cc_indexes.py # (2)
python3 prepare_wet_indexes.py # (3)
python3 process_wet_files.py # (4)
- (1) downloads all of the 299 CC index files and filters the URLs by language;
- (2) just takes these 299 files and saves them into 10 feather files for convenience;
- (3) calculates 2 things - a set of unique URLs for later use and the list of WET files these URLs belong to;
- (4) just goes through the WET files one by one (in a multi-processed fashion, of course), downloads them, parses and cleans them, splits the text into sentences and saves the results into feather files (a stripped-down sketch of this step follows below);
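For illustration, here is a stripped-down sketch of what step (4) does for a single WET file. It uses requests and the warcio library rather than the WARC fork linked in the references, and the naive newline-based sentence splitting is a placeholder for the real cleaning / tokenization logic in the scripts:

```python
import requests
from warcio.archiveiterator import ArchiveIterator  # swap in the WARC library of your choice


def process_wet_file(wet_url, min_chars=20):
    """Download one .wet.gz file and yield (url, sentence) pairs."""
    resp = requests.get(wet_url, stream=True)
    resp.raise_for_status()

    for record in ArchiveIterator(resp.raw):
        # WET files store the extracted plain text in 'conversion' records
        if record.rec_type != 'conversion':
            continue
        url = record.rec_headers.get_header('WARC-Target-URI')
        text = record.content_stream().read().decode('utf-8', errors='ignore')

        # Naive splitting - the real scripts use proper sentence tokenization
        for sentence in text.split('\n'):
            sentence = sentence.strip()
            if len(sentence) >= min_chars:
                yield url, sentence
```

In the real pipeline this function would be mapped over the prioritized WET file list with a process pool, and the resulting (url, sentence) pairs would be dumped into feather files.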
Further optimizations and processing speed
Obviously it will differ hugely depending on your Internet connection and CPU power, but in our case:
- Downloading the whole index took approximately 1 week with an average speed of around 300-500 kb/s. It was done in the background at the office, with no hurry;
- Downloading and processing ~1,000 WET files (those with the largest Russian content ratio) took ~15 hours and produced ~25-30 GB of texts on 3 physical cores of my Intel® Core™ i7-6800K CPU @ 3.40GHz. So I guess it is safe to say that 1 day roughly equals a 40-50 GB corpus on half of my home PC. Also, for this task the bandwidth was almost a non-issue in my case;
How the script can be improved:
- Download files in advance and / or add a queue to collect the data and a separate queue to post-process it (see the sketch below). This will help you utilize your bandwidth and CPU at 100% of their capacity at all times. I personally decided not to invest in this, as it is easier to just wait a bit - also, my CPU will not suddenly grow in size;
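A minimal sketch of what such a split might look like with multiprocessing. This is not taken from the actual scripts; it only illustrates separating the I/O-bound download side from the CPU-bound processing side with a bounded queue:

```python
import multiprocessing as mp
from concurrent.futures import ThreadPoolExecutor

import requests


def download_worker(wet_urls, queue, n_threads=8):
    """I/O-bound producer: keep the bandwidth busy, push raw bytes onto the queue."""
    def fetch(url):
        queue.put((url, requests.get(url).content))

    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        list(pool.map(fetch, wet_urls))
    queue.put(None)  # poison pill - tell the consumers we are done


def process_worker(queue):
    """CPU-bound consumer: parse / clean / split one downloaded WET file at a time."""
    while True:
        item = queue.get()
        if item is None:
            queue.put(None)  # pass the pill on to the other consumers
            break
        url, raw_bytes = item
        # ... parse the WET file, split into sentences, save a feather file ...


if __name__ == '__main__':
    wet_urls = []                      # the prioritized WET file list goes here
    queue = mp.Queue(maxsize=16)       # bounded, so RAM does not explode
    downloader = mp.Process(target=download_worker, args=(wet_urls, queue))
    consumers = [mp.Process(target=process_worker, args=(queue,)) for _ in range(3)]

    downloader.start()
    for p in consumers:
        p.start()
    downloader.join()
    for p in consumers:
        p.join()
```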
References
- Previous articles in this mini-series:
- Parsing Wikipedia in 4 plain commands in python;
- Previous article about parsing the Common Crawl;
- A gist with the scripts;
- A list of useful Common Crawl starter links:
- http://commoncrawl.org/connect/blog/
- http://commoncrawl.org/2018/03/index-to-warc-files-and-urls-in-columnar-format/
- https://www.slideshare.net/RobertMeusel/mining-a-large-web-corpus
- Getting links to WET files;
- Reanimated python3 WARC file library;