sudo apt-get install maven
(or: sudo apt update
     sudo apt install maven)
git clone https://github.com/commoncrawl/cc-index-table.git
cd cc-index-table
mvn package
vagrant@node1:~/cc-index-table$ ./src/script/convert_url_index.sh https://commoncrawl.s3.amazonaws.com/cc-index/collections/CC-MAIN-2019-30/indexes/cdx-00000.gz hdfs:///user/vagrant/cc-index-table


spark:
https://jaceklaskowski.gitbooks.io/mastering-apache-spark/spark-shell.html

============
Dr Bainbridge found the following Vagrant file, which will set up hadoop and spark, presumably for cluster computing:

https://github.com/martinprobson/vagrant-hadoop-hive-spark

Vagrant:
 * Guide: https://www.vagrantup.com/intro/getting-started/index.html
 * Common cmds: https://blog.ipswitch.com/5-vagrant-commands-you-need-to-know
 * vagrant reload = vagrant halt + vagrant up: https://www.vagrantup.com/docs/cli/reload.html
 * https://stackoverflow.com/questions/46903623/how-to-use-firefox-ui-in-vagrant-box
 * https://stackoverflow.com/questions/22651399/how-to-install-firefox-in-precise64-vagrant-box
     sudo apt-get -y install firefox
 * vagrant install emacs: https://medium.com/@AnnaJS15/getting-started-with-virtualbox-and-vagrant-8d98aa271d2a

 * hadoop conf: sudo vi /usr/local/hadoop-2.7.6/etc/hadoop/mapred-site.xml
 * name node stuck in safe mode: https://data-flair.training/forums/topic/mkdir-cannot-create-directory-data-name-node-is-in-safe-mode/
---
==> node1: Forwarding ports...
    node1: 8080 (guest) => 8081 (host) (adapter 1)
    node1: 8088 (guest) => 8089 (host) (adapter 1)
    node1: 9083 (guest) => 9084 (host) (adapter 1)
    node1: 4040 (guest) => 4041 (host) (adapter 1)
    node1: 18888 (guest) => 18889 (host) (adapter 1)
    node1: 16010 (guest) => 16011 (host) (adapter 1)
    node1: 22 (guest) => 2200 (host) (adapter 1)
==> node1: Running 'pre-boot' VM customizations...


==> node1: Checking for guest additions in VM...
    node1: The guest additions on this VM do not match the installed version of
    node1: VirtualBox! In most cases this is fine, but in rare cases it can
    node1: prevent things such as shared folders from working properly. If you see
    node1: shared folder errors, please make sure the guest additions within the
    node1: virtual machine match the version of VirtualBox you have installed on
    node1: your host and reload your VM.
    node1:
    node1: Guest Additions Version: 5.1.38
    node1: VirtualBox Version: 5.2
------------

At http://commoncrawl.org/2018/10/september-2018-crawl-archive-now-available/, it says
"The September crawl contains 500 million new URLs, not contained in any crawl archive before. New URLs stem from

    the continued seed donation of URLs from mixnode.com
    ..."

https://www.mixnode.com/
"The entire web, in your hands

Mixnode turns the web into a database that you can run queries against. Say goodbye to web crawling, forget about web scraping, never run a spider again: get all the web data that you need using simple SQL queries."
--------------
https://commoncrawl.github.io/cc-crawl-statistics/plots/languages
http://commoncrawl.org/2018/08/august-2018-crawl-archive-now-available/

    The JSON for the index files (that we downloaded for .nz) already contained a "languages:" field. The above page mentions that this shows the primary, up to 3, detected languages of the document.

"Language Annotations

We now run the Compact Language Detector 2 (CLD2) on HTML pages to identify the language of a document. CLD2 is able to identify 160 different languages and up to 3 languages per document. The detected languages resp. the ISO-639-3 code are shown in the URL index as a new field, e.g., "languages": "zho,eng". The WARC metadata records contain the full CLD2 response including scores and text coverage:

languages-cld2: {"reliable":true,"text-bytes":3783,"languages":[{"code":"zh","code-iso-639-3":"zho","text-covered":0.93,"score":1943.0,"name":"Chinese"},{"code":"en","code-iso-639-3":"eng","text-covered":0.05,"score":523.0,"name":"ENGLISH"}]}

On github you’ll find the Java bindings to the CLD2 native library and the distribution of the primary document languages as part of our crawl statistics.

Please note that the columnar index does not contain the detected languages for now."
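
Since the WARC metadata records carry the full CLD2 response as JSON, picking out pages detected as Maori should just be a matter of parsing that field and checking the ISO-639-3 code ("mri" for Maori). A minimal Python sketch, using the example value quoted above:

import json

# The languages-cld2 value quoted above (normally read out of a WARC metadata record)
raw = ('{"reliable":true,"text-bytes":3783,"languages":['
       '{"code":"zh","code-iso-639-3":"zho","text-covered":0.93,"score":1943.0,"name":"Chinese"},'
       '{"code":"en","code-iso-639-3":"eng","text-covered":0.05,"score":523.0,"name":"ENGLISH"}]}')

info = json.loads(raw)
for lang in info["languages"]:
    if lang["code-iso-639-3"] == "mri":   # ISO-639-3 code for Maori
        print("Maori detected, covering", lang["text-covered"], "of the text")
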
http://commoncrawl.org/2018/10/september-2018-crawl-archive-now-available/
"the columnar index contains the content language of a web page as a new field. Please read the instructions below how to upgrade your tools to read newly added fields."

http://commoncrawl.org/2018/03/index-to-warc-files-and-urls-in-columnar-format/

SPARK (Spark SQL): https://github.com/commoncrawl/cc-index-table
    with an example on selecting languages
https://commoncrawl.s3.amazonaws.com/cc-index/collections/CC-MAIN-2019-30/indexes/cluster.idx

./convert_url_index.sh https://commoncrawl.s3.amazonaws.com/cc-index/collections/CC-MAIN-2019-30/indexes/cdx-00000.gz hdfs:///user/vagrant/cc-index-table
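
For reference, a sketch of how the converted table could then be queried from PySpark for Maori pages. The HDFS path is the output path assumed from the convert_url_index.sh run above; the column names (url, content_languages, warc_filename, warc_record_offset, warc_record_length) are the ones documented for the columnar index:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("cc-index-mri").getOrCreate()

# Parquet table produced by convert_url_index.sh above (path assumption)
df = spark.read.parquet("hdfs:///user/vagrant/cc-index-table")
df.createOrReplaceTempView("ccindex")

# content_languages holds the CLD2 result, e.g. "mri" or "mri,eng"
mri = spark.sql("""
    SELECT url, warc_filename, warc_record_offset, warc_record_length
    FROM ccindex
    WHERE content_languages LIKE '%mri%'
""")
mri.show(20, truncate=False)
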
---

https://www.aclweb.org/anthology/L16-1443 (2016, as per https://pbn.nauka.gov.pl/sedno-webapp/getReport/38108)

https://dkpro.github.io/dkpro-c4corpus/
"DKPro C4CorpusTools is a collection of tools for processing CommonCrawl corpus, including Creative Commons license detection, boilerplate removal, language detection, and near-duplicate removal."

https://zoidberg.ukp.informatik.tu-darmstadt.de/jenkins/job/DKPro%20C4Corpus/org.dkpro.c4corpus$dkpro-c4corpus-doc/doclinks/1/#_including_c4corpustools_in_your_java_projects
- Including C4CorpusTools in your Java projects
- Working with C4Corpus - Word count example

https://github.com/farhansiddiqui/webscale_nlp

https://github.com/commoncrawl/language-detection-cld2
---------
There's already Python code for getting text:

https://spark-in.me/post/parsing-common-crawl-in-two-simple-commands
https://gist.github.com/Smerity/afe7430fdb4371015466

From the spark-in.me post above:

"But it turns out - it is not. This can be attributed to the effort that has been made to make the CC more accessible. The killer feature for me was the presence of their index weighting only ~200Gb, that also features a language detection option, i.e. you do not need to analyze top-level-domains or do any significant data mining."

What does the "language detection option" discussion above mean?
------------
Skipping CrawlDiagnostics (see below) and robots.txt gz files:

http://commoncrawl.org/2018/08/august-2018-crawl-archive-now-available/

"HTTP 304 notmodified" responses are now stored as WARC revisit records in the "crawldiagnostics" subset along with 404s, redirects and other non-200 responses. For now the revisit records contain a payload digest although there is no payload sent together with HTTP 304 responses. The stupid reason is that the columnar index requires the digest field and we want to make sure that all tools continue to work as expected. The SHA-1 digest of an empty payload (zero bytes) is used for the revisit records.
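
As a sanity check on that last sentence: WARC payload digests are base32-encoded SHA-1, so the constant digest those revisit records carry can be reproduced in two lines of Python:

import base64, hashlib

# SHA-1 of zero bytes, base32-encoded as in WARC-Payload-Digest fields
print("sha1:" + base64.b32encode(hashlib.sha1(b"").digest()).decode())
# -> sha1:3I42H3S6NNFQ2MSVX7XZKYAYSCX5QBYJ
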

http://iipc.github.io/warc-specifications/specifications/warc-format/warc-1.1/#revisit
‘revisit’
General

A ‘revisit’ record describes the revisitation of content already archived, and might include only an abbreviated content body which has to be interpreted relative to a previous record. Most typically, a ‘revisit’ record is used instead of a ‘response’ or ‘resource’ record to indicate that the content visited was either a complete or substantial duplicate of material previously archived.
...
-------

WET FILES:

https://stackoverflow.com/questions/16649535/access-a-common-crawl-aws-public-dataset/25297965#25297965

http://commoncrawl.org/2019/07/june-2019-crawl-archive-now-available/
    File type    File list                       #Files    Total size, compressed (TiB)
    WET files    CC-MAIN-2019-26/wet.paths.gz     56000    7.59


http://commoncrawl.org/2015/04/announcing-the-common-crawl-index/
(Instructions)

https://gist.github.com/svemir/4207353
(Hadoop related) A Common Crawl Experiment

https://gist.github.com/Smerity/afe7430fdb4371015466
    Extract just the text from Common Crawl WARC WET files

https://stackoverflow.com/tags/common-crawl/hot?filter=all

https://stackoverflow.com/questions/45920527/get-offset-and-length-of-a-subset-of-a-wat-archive-from-common-crawl-index-serve/46152773#46152773

"The Common Crawl index does not contain offsets into WAT and WET files. So, the only way is to search the whole WAT/WET file for the desired record/URL. Eventually, it would be possible to estimate the offset because the record order in WARC and WAT/WET files is the same."
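
So, given only a URL, locating its WET record means streaming through the whole file. A sketch with the warcio library (the target URL and filename here are just placeholders): WET text records have record type "conversion" and carry the page URL in the WARC-Target-URI header.

from warcio.archiveiterator import ArchiveIterator

target = "http://example.co.nz/"   # hypothetical URL to look for
with open("some-wet-file.warc.wet.gz", "rb") as stream:   # placeholder filename
    for record in ArchiveIterator(stream):
        if record.rec_type != "conversion":   # only the WET text records
            continue
        if record.rec_headers.get_header("WARC-Target-URI") == target:
            text = record.content_stream().read().decode("utf-8", "replace")
            print(text[:500])
            break
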

https://dmorgan.info/posts/common-crawl-python/
https://groups.google.com/forum/#!topic/common-crawl/pdI3w09AAbQ

Example:
WARC:
tikauka:[142]/Scratch/anupama/maori-lang-detection>wget https://commoncrawl.s3.amazonaws.com/crawl-data/CC-MAIN-2019-30/segments/1563195526237.47/crawldiagnostics/CC-MAIN-20190719115720-20190719141720-00077.warc.gz
WET:
tikauka:[142]/Scratch/anupama/maori-lang-detection>wget https://commoncrawl.s3.amazonaws.com/crawl-data/CC-MAIN-2019-30/segments/1563195526237.47/wet/CC-MAIN-20190719115720-20190719141720-00508.warc.wet.gz
tikauka:[142]/Scratch/anupama/maori-lang-detection>gunzip CC-MAIN-20190719115720-20190719141720-00508.warc.wet.gz
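
Once a WET file like the one above is in hand, the same kind of iteration can feed each record's text to a language detector. A sketch assuming the pycld2 Python bindings for CLD2 (the detector Common Crawl itself uses; its two-letter code for Maori is "mi"), run over the file just downloaded:

import pycld2
from warcio.archiveiterator import ArchiveIterator

# warcio reads the .gz as-is, so the gunzip step is not strictly needed.
with open("CC-MAIN-20190719115720-20190719141720-00508.warc.wet.gz", "rb") as stream:
    for record in ArchiveIterator(stream):
        if record.rec_type != "conversion":
            continue
        text = record.content_stream().read().decode("utf-8", "replace")
        is_reliable, _, details = pycld2.detect(text)
        # details holds up to three (name, code, percent, score) tuples
        if is_reliable and details[0][1] == "mi":
            print(record.rec_headers.get_header("WARC-Target-URI"))
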
--------------------------------------------
http://webdatacommons.org/

https://dzone.com/articles/need-billions-of-web-pages-dont-bother-crawling

    Ran Geva 2017-04-09

    Excellent article! CommonCrawl is an amazing resource. You should also check out webdatacommons.org, which uses their data to extract structured data (using RDFa, Microdata..)

    If I may add a shameless plug here and tell you about Webhose.io [PAYWARE/SERVICES]. We provide an API to structured web data. The idea is the same as the one you presented. Instead of crawling the web, we already crawl millions of sites, download the data, structure and organize it so anyone can easily consume it and plug into their own system.

https://stackoverflow.com/questions/12097848/finding-all-domains-of-a-country

    -> http://urlsearch.commoncrawl.org/
    -> http://index.commoncrawl.org/
    -> INSTRUCTIONS: https://groups.google.com/forum/#!msg/common-crawl/3QmQjFA_3y4/vTbhGqIBBQAJ


Go to: http://index.commoncrawl.org/
Grab the newest gzipped archive file.
Then open it and find the cluster.idx file listed in it.
Copy its relative URL and prefix it with https://commoncrawl.s3.amazonaws.com/

THEN:

wharariki:[101]/Scratch/ak19/heritrix/heritrix-3.4.0-SNAPSHOT>wget https://commoncrawl.s3.amazonaws.com/cc-index/collections/CC-MAIN-2019-26/indexes/cluster.idx
    --2019-07-29 17:40:45--  https://commoncrawl.s3.amazonaws.com/cc-index/collections/CC-MAIN-2019-26/indexes/cluster.idx
    Resolving commoncrawl.s3.amazonaws.com (commoncrawl.s3.amazonaws.com)... 52.216.8.171
    Connecting to commoncrawl.s3.amazonaws.com (commoncrawl.s3.amazonaws.com)|52.216.8.171|:443... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: 125059234 (119M) [binary/octet-stream]
    Saving to: ‘cluster.idx’

    cluster.idx 100%[============================================================>] 119.27M 8.51MB/s in 15s

    2019-07-29 17:41:01 (7.83 MB/s) - ‘cluster.idx’ saved [125059234/125059234]

wharariki:[102]/Scratch/ak19/heritrix/heritrix-3.4.0-SNAPSHOT>grep '^nz,' cluster.idx | cut -f2 | uniq
cdx-00237.gz
cdx-00238.gz

Prefix "https://commoncrawl.s3.amazonaws.com/cc-index/collections/CC-MAIN-2019-26/indexes/" to the listed gz files and wget them:

    https://commoncrawl.s3.amazonaws.com/cc-index/collections/CC-MAIN-2019-26/indexes/cdx-00237.gz
    https://commoncrawl.s3.amazonaws.com/cc-index/collections/CC-MAIN-2019-26/indexes/cdx-00238.gz


Unzip those, and we have all URLs with TLD .nz:
    wharariki:[131]/Scratch/ak19/heritrix/heritrix-3.4.0-SNAPSHOT>gunzip cdx-00237.gz
    wharariki:[132]/Scratch/ak19/heritrix/heritrix-3.4.0-SNAPSHOT>gunzip cdx-00238.gz

The first of these files also includes Norwegian TLDs (lines starting with "no,") and the second gz file includes TLDs that start with "org,".
So extract just the lines that start with "^nz," [https://www.unix.com/shell-programming-and-scripting/176608-how-copy-lines-starts-either-3-4-into-new-file.html]:
    wharariki:[107]/Scratch/ak19/heritrix/heritrix-3.4.0-SNAPSHOT>egrep "^nz," cdx-00237 > nz-only-TLDs-from-237-238.txt
    wharariki:[108]/Scratch/ak19/heritrix/heritrix-3.4.0-SNAPSHOT>egrep "^nz," cdx-00238 >> nz-only-TLDs-from-237-238.txt
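
The shard lookup can also be scripted. A small sketch that reproduces the grep/cut/uniq step above, assuming cluster.idx's tab-separated layout (field 1 is the SURT key plus timestamp, field 2 the cdx shard covering that block of the index, which is what cut -f2 relied on):

# Find which cdx shards of the index cover the .nz TLD.
shards = []
with open("cluster.idx", encoding="utf-8") as f:
    for line in f:
        key, shard = line.split("\t")[:2]
        if key.startswith("nz,") and shard not in shards:
            shards.append(shard)
print(shards)   # expected: ['cdx-00237.gz', 'cdx-00238.gz']
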

Checking that abacusinstitute.ac.nz is also in the current June 2019 list:
    egrep "ac,abacusinstitute" nz-only-TLDs-from-237-238.txt

OTHER:
https://www.tutorialspoint.com/hadoop/hadoop_mapreduce
http://stormcrawler.net/
http://storm.apache.org/getting-help.html

https://dzone.com/articles/need-billions-of-web-pages-dont-bother-crawling
Basically, each release is split into 100 segments. Each segment has three types of files: WARC, WAT, and WET. As explained on the Get Started page:

    WARC files store the raw crawl data.
    WAT files store computed metadata for the data stored in the WARC.
    WET files store extracted plaintext from the data stored in the WARC.

Note that WAT and WET are in the WARC format too! In fact, the WARC format is nothing more than an envelope with metadata and content. In the case of the WARC files, that content is the HTTP requests and responses, whereas, for the WET files, it is simply the plain text extracted from the WARCs. The WAT files contain a JSON representation of metadata extracted from the WARCs, e.g. title, links etc.
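
That "envelope" structure is easy to see with warcio, which reads WARC, WAT and WET files alike. A small sketch that prints the record type and target URL of the first few records of any of the files above:

from warcio.archiveiterator import ArchiveIterator

def peek(path, limit=5):
    # Print the envelope headers of the first few records; the same loop
    # works for .warc.gz, .warc.wet.gz and .warc.wat.gz files.
    with open(path, "rb") as stream:
        for i, record in enumerate(ArchiveIterator(stream)):
            if i >= limit:
                break
            print(record.rec_type,
                  record.rec_headers.get_header("WARC-Target-URI"))

peek("CC-MAIN-20190719115720-20190719141720-00508.warc.wet.gz")   # WET file from the example above
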


Resources

The Get Started page on the CommonCrawl website contains useful pointers to libraries and code in various programming languages to process the datasets. There is also a list of tutorials and presentations.

It is also worth noting that CommonCrawl provides an index per release, allowing you to search for URLs (including wildcards) and retrieve the segment and offset therein where the content of the URL is stored, e.g.:

    { "urlkey": "org,apache)/", "timestamp": "20170220105827", "status": "200", "url": "http://apache.org/", "filename": "crawl-data/CC-MAIN-2017-09/segments/1487501170521.30/warc/CC-MAIN-20170219104610-00206-ip-10-171-10-108.ec2.internal.warc.gz", "length": "13315", "mime": "text/html", "offset": "14131184", "digest": "KJREISJSKKGH6UX5FXGW46KROTC6MBEM" }
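
Because the index returns filename, offset and length, a single record can be pulled out of a multi-gigabyte WARC with an HTTP Range request; each record is a standalone gzip member, so the slice decompresses on its own. A sketch using the values from the example hit above:

import gzip
import requests

filename = ("crawl-data/CC-MAIN-2017-09/segments/1487501170521.30/warc/"
            "CC-MAIN-20170219104610-00206-ip-10-171-10-108.ec2.internal.warc.gz")
offset, length = 14131184, 13315   # from the index entry above

resp = requests.get("https://commoncrawl.s3.amazonaws.com/" + filename,
                    headers={"Range": "bytes=%d-%d" % (offset, offset + length - 1)})
record = gzip.decompress(resp.content)   # one WARC record: headers + HTTP response
print(record[:400].decode("utf-8", "replace"))
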

This is useful but only if you are interested in a limited number of URLs which you know in advance. In many cases, what you know in advance is what you want to extract, not where it will be extracted from. For situations such as these, you will need distributed batch-processing using MapReduce in Apache Hadoop or Apache Spark.


https://www.forbes.com/sites/kalevleetaru/2017/09/28/common-crawl-and-unlocking-web-archives-for-research/#7067d4313b83
One large web archive has bucked this trend and stood alone among its peers: Common Crawl. Similar to other large web archiving initiatives like the Internet Archive, Common Crawl conducts regular web wide crawls of the open web and preserves all of the content it downloads in the standard WARC file format. Unlike many other archives, it focuses primarily on preserving HTML web pages and does not archive images, videos, JavaScript files, CSS stylesheets, etc. Its goal is not to preserve the exact look and feel of a website on a given snapshot in time, but rather to collect a vast cross section of HTML web pages from across the web in a single place to enable large-scale data mining at web scale.
...
The project excludes sites which have robots.txt exclusion policies, following the historical policy of many other web archives, though it is worth noting that the Internet Archive earlier this year began slowly phasing out its reliance on such files due to their detrimental effect on preservation completeness. Common Crawl also allows sites to request removal from their index. Other than these cases, Common Crawl attempts to crawl as much of the remaining web as possible, aiming for a representative sample of the open web.
...
Ms. Crouse [Director of Common Crawl] noted the risk-averse nature of the web archiving community as a whole (historically many adhered and still adhere to a strict “opt in” policy requiring prior approval before crawling a site) and the unwillingness of many archives to modernize their thinking on copyright and to engage more closely with the legal community in ways that could help them expand fair use horizons. In particular, she noted “since we [in the US] are beholden to the Copyright Act, while living in a digital age, many well-intentioned organizations devoted to web science, archiving, and information provision may benefit from a stronger understanding of how copyright is interpreted in present day, and its hard boundaries” and that “many talented legal advisers and groups are interested in the precedent-setting nature of this topic; some are willing to work Pro Bono.”
...
Returning to the difference between Common Crawl’s datasets and traditional preservation-focused web archiving, Ms. Crouse emphasized that they capture only HTML pages and exclude multimedia content like images, video and other dynamic content.

She noted that a key aspect of their approach to fair use is that web pages are intended for consumption by human beings one at a time using a web browser, while Common Crawl concatenates billions of pages together in the specialized WARC file format designed for machine data mining. Specifically, “Common Crawl does not offer separate/individual web pages for easy consumption. The three data formats that are provided include text, metadata, and raw data, and the data is concatenated” and “the format of the output is not a downloaded web page. The output is in WARC file format which contains the components of a page that are beneficial to machine-level analysis and make for space-efficient archiving (essentially: header, text, and some metadata).”

As Ms. Crouse put it, “this is big data intended for machine learning/readability. Further, our intention for its use is for public benefit i.e. to encourage research and innovation, not direct consumption.” She noted that “from the layperson’s perspective, it is not at all trivial at present to extract a specific website’s content (that is, text) from a Common Crawl dataset. This task generally requires one to know how to install and run a Hadoop cluster, among other things. This is not structured data. Further it is likely that not all pages of that website will be included (depending on the parameters for depth set for the specific crawl).” This means that “the bulk of [Common Crawl’s] users are from the noncommercial, educational, and research sectors. At a higher level, it’s important to note that we provide a broad and representative sample of the web, in the form of web crawl data, each month. No one really knows how big the web is, and at present, we limit our monthly data publication to approximately 3 billion pages.”


Common Crawl believes it addresses this through the fact that its archive represents only a sample of each website crawled, rather than striving for 100% coverage. Specifically, Ms. Crouse noted that “at present, [crawls are] in monthly increments that are discontinuous month-to-month. We do only what is reasonable, necessary, and economical to achieve a representative sample. For instance, we limit the number of pages crawled from any given domain so, for large content owners, it is highly probable that their content, if included in a certain month’s crawl data, is not wholly represented and thus not ideal for mining for comprehensive results ... if the content owner is not a large site, or in a niche market, their URL is less likely to be included in the seeds in the frontier, and, since we limit depth (# of links followed) for the sake of both economy and broader representative web coverage, 'niche' content may not even appear in a given month’s dataset.”

To put it another way, Common Crawl’s mission is to create a “representative sample” of the web at large by crawling a sampling of pages and limiting the number of pages from each site they capture. Thus, their capture of any given site will represent a discontinuous sampling of pages that can change from month to month. A researcher wishing to analyze a single web site in its entirety would therefore not be able to turn to Common Crawl and would instead have to conduct their own crawl of the site or turn to a commercial aggregator that partners with the content holder to license the complete contents of the site.

In Common Crawl’s view this is a critical distinction that sets it apart from both traditional web archiving and the commercial content aggregators that generate data mining revenue for content owners. By focusing on creating a “representative sample” of the web at large, rather than attempting to capture a single site in its entirety (and in fact ensuring that it does not include more than a certain number of pages per site), the crawl self-limits itself to being applicable only to macro-level research examining web scale questions. Such “web scale” questions cannot be answered through any existing open dataset, and by incorporating specific design features Common Crawl ensures that more traditional research questions, like data mining the entirety of a single site, which might be viewed as redistribution of that site or competing with its owner’s ability to license its content for data mining, are simply not possible.
------------