https://codereview.stackexchange.com/questions/198343/crawl-and-gather-all-the-urls-recursively-in-a-domain
http://lucene.472066.n3.nabble.com/Using-nutch-just-for-the-crawler-fetcher-td611918.html
https://www.quora.com/What-are-some-Web-crawler-tips-to-avoid-crawler-traps
https://cwiki.apache.org/confluence/display/nutch/
https://cwiki.apache.org/confluence/display/NUTCH/Nutch2Crawling
https://cwiki.apache.org/confluence/display/nutch/ReaddbOptions
https://moz.com/top500

----------- NUTCH -----------
https://stackoverflow.com/questions/35449673/nutch-and-solr-indexing-blacklist-domain
https://nutch.apache.org/apidocs/apidocs-1.6/org/apache/nutch/urlfilter/domainblacklist/DomainBlacklistURLFilter.html
https://lucene.472066.n3.nabble.com/blacklist-for-crawling-td618343.html
https://lucene.472066.n3.nabble.com/Content-of-size-X-was-truncated-to-Y-td4003517.html

Google: nutch mirror web site
https://stackoverflow.com/questions/33354460/nutch-clone-website
https://stackoverflow.com/questions/35714897/nutch-not-crawling-entire-website
    [fetch -all seems to be a Nutch v2 thing?]

Google (30 Sep): site mirroring with nutch
https://grokbase.com/t/nutch/user/125sfbg0pt/using-nutch-for-web-site-mirroring
https://lucene.472066.n3.nabble.com/Using-nutch-just-for-the-crawler-fetcher-td611918.html
http://www.cs.ucy.ac.cy/courses/EPL660/lectures/lab6.pdf (slide 5 onwards)

Crawler software options:
https://repositorio.iscte-iul.pt/bitstream/10071/2871/1/Building%20a%20Scalable%20Index%20and%20Web%20Search%20Engine%20for%20Music%20on.pdf
See also p.20 (HTTrack).

Google: nutch performance tuning
* https://stackoverflow.com/questions/24383212/apache-nutch-performance-tuning-for-whole-web-crawling
* https://stackoverflow.com/questions/4871972/how-to-speed-up-crawling-in-nutch
* https://cwiki.apache.org/confluence/display/nutch/OptimizingCrawls

NUTCH INSTALLATION:
* Nutch v1: https://cwiki.apache.org/confluence/display/nutch/NutchTutorial#NutchTutorial-SetupSolrforsearch
* Nutch v2 installation and set up:
  https://cwiki.apache.org/confluence/display/NUTCH/Nutch2Tutorial
  https://subscription.packtpub.com/book/big_data_and_business_intelligence/9781783286850/1/ch01lvl1sec09/installing-and-configuring-apache-nutch
* Nutch doesn't work with Spark (yet):
  https://stackoverflow.com/questions/29950299/distributed-web-crawling-using-apache-spark-is-it-possible

SOLR:
* Query syntax: http://www.solrtutorial.com/solr-query-syntax.html
* Deleting a core: https://factorpad.com/tech/solr/reference/solr-delete.html
* If you change a Nutch 2 configuration,
  https://stackoverflow.com/questions/16401667/java-lang-classnotfoundexception-org-apache-gora-hbase-store-hbasestore
  explains that you can rebuild Nutch from the top of the Nutch source tree with:
      cd <top of the nutch source tree>
      ant clean
      ant runtime

----------------------------------
Apache Nutch 2 with newer HBase (hbase-common-1.4.8.jar)
----------------------------------
1. The HBase jar files need to go into runtime/local/lib -- but not
   slf4j-log4j12-1.7.10.jar (there's already a slf4j-log4j12-1.7.5.jar there),
   so remove the 1.7.10 jar from runtime/local/lib after copying the rest over.
2. https://stackoverflow.com/questions/46340416/how-to-compile-nutch-2-3-1-with-hbase-1-2-6
   https://stackoverflow.com/questions/39834423/apache-nutch-fetcherjob-throws-nosuchelementexception-deep-in-gora/39837926#39837926
   Unfortunately, the page https://paste.apache.org/jjqz referred to above,
   which contained the patches for using Gora 0.7, is no longer available.
   See also:
   http://mail-archives.apache.org/mod_mbox/nutch-user/201602.mbox/%3C56B2EA23.8080801@cisinlabs.com%3E
   https://www.mail-archive.com/user@nutch.apache.org/msg14245.html
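A minimal shell sketch of step 1 above, assuming HBase 1.4.8 is unpacked at
~/hbase-1.4.8 and Nutch 2.3.1 at ~/apache-nutch-2.3.1 (both paths, and the
hbase-*.jar glob, are assumptions -- adjust to your own layout):

    # Copy the newer HBase jars into Nutch's local runtime.
    NUTCH_LIB=~/apache-nutch-2.3.1/runtime/local/lib
    HBASE_HOME=~/hbase-1.4.8

    cp "$HBASE_HOME"/lib/hbase-*.jar "$NUTCH_LIB"/

    # runtime/local/lib already ships slf4j-log4j12-1.7.5.jar, so drop the
    # 1.7.10 copy if it came across, to avoid two SLF4J bindings.
    rm -f "$NUTCH_LIB"/slf4j-log4j12-1.7.10.jar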
------------------------------------------------------------------------------
Other way: Nutch on its own vagrant VM with a specified HBase, or Nutch with MongoDB
------------------------------------------------------------------------------
* https://lobster1234.github.io/2017/08/14/search-with-nutch-mongodb-solr/
* https://waue0920.wordpress.com/2016/08/25/nutch-2-3-1-hbase-0-98-hadoop-2-5-solr-4-10-3/
The older but recommended HBase 0.98.21 for Hadoop 2 can be downloaded from
https://archive.apache.org/dist/hbase/0.98.21/

-----
HBASE commands
-----
/usr/local/hbase/bin/hbase shell
https://learnhbase.net/2013/03/02/hbase-shell-commands/
http://dwgeek.com/read-hbase-table-using-hbase-shell-get-command.html/
Dropping tables: https://www.tutorialspoint.com/hbase/hbase_drop_table.htm

    > list
        davidbHomePage_webpage is a table
    > get 'davidbHomePage_webpage', '1'
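The same kind of session can be scripted, as a sketch -- scan with a LIMIT and
count are standard HBase shell commands; the table name is the one from the
list output above:

    # Inspect the Nutch web table non-interactively via a heredoc.
    /usr/local/hbase/bin/hbase shell <<'EOF'
    list
    scan 'davidbHomePage_webpage', {LIMIT => 2}
    count 'davidbHomePage_webpage'
    EOF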
Solution to get a working Nutch 2: get
http://trac.greenstone.org/browser/gs3-extensions/maori-lang-detection/hdfs-cc-work/vagrant-for-nutch2.tar.gz
and follow the instructions in my README file in there.

---------------------------------------------------------------------
ALTERNATIVES TO NUTCH - looking for site mirroring capabilities
---------------------------------------------------------------------
=> https://anarc.at/services/archive/web/

Autistici's crawl [https://git.autistici.org/ale/crawl] needs Go:
https://medium.com/better-programming/install-go-1-11-on-ubuntu-18-04-16-04-lts-8c098c503c5f
https://guide.freecodecamp.org/go/installing-go/ubuntu-apt-get/
To uninstall:
https://medium.com/@firebitsbr/how-to-uninstall-from-the-apt-manager-uninstall-just-golang-go-from-universe-debian-ubuntu-82d6a3692cbd
https://tecadmin.net/install-go-on-ubuntu/
[Our vagrant VMs are Ubuntu 16.04 LTS, as discovered by running "lsb_release -a".]

https://alternativeto.net/software/apache-nutch/
https://alternativeto.net/software/wget/
https://github.com/ArchiveTeam/grab-site/blob/master/README.md#inspecting-warc-files-in-the-terminal
https://github.com/ArchiveTeam/wpull

-------------------
Running nutch 2.x
-------------------
LINKS:
https://lucene.472066.n3.nabble.com/Nutch-2-x-readdb-command-dump-td4033937.html
https://cwiki.apache.org/confluence/display/nutch/ReaddbOptions
https://lobster1234.github.io/2017/08/14/search-with-nutch-mongodb-solr/  ## most useful for running nutch 2.x crawls
https://www.mobomo.com/2017/06/the-basics-working-with-nutch-2-x/
    "Fetch
    This is where the magic happens. During the fetch step, Nutch crawls the
    urls selected in the generate step. The most important argument you need
    is -threads: this sets the number of fetcher threads per task. Increasing
    this will make crawling faster, but setting it too high can overwhelm a
    site and it might shut out your crawler, as well as take up too much
    memory from your machine. Run it like this:
    $ nutch fetch -threads 50"
https://examples.javacodegeeks.com/enterprise-java/apache-hadoop/apache-hadoop-nutch-tutorial/
https://www.yegor256.com/2019/04/17/nutch-from-java.html

http://nutch.sourceforge.net/docs/en/tutorial.html:
    Intranet: Configuration
    To configure things for intranet crawling you must:
    * Create a flat file of root urls. For example, to crawl the nutch.org
      site you might start with a file named urls containing just the Nutch
      home page. All other Nutch pages should be reachable from this page.
      The urls file would thus look like:
          http://www.nutch.org/
    * Edit the file conf/crawl-urlfilter.txt and replace MY.DOMAIN.NAME with
      the name of the domain you wish to crawl. For example, if you wished to
      limit the crawl to the nutch.org domain, the line should read:
          +^http://([a-z0-9]*\.)*nutch.org/
      This will include any url in the domain nutch.org.

    Intranet: Running the Crawl
    Once things are configured, running the crawl is easy. Just use the crawl
    command. Its options include:
        -dir dir          names the directory to put the crawl in.
        -depth depth      indicates the link depth from the root page that should be crawled.
        -delay delay      determines the number of seconds between accesses to each host.
        -threads threads  determines the number of threads that will fetch in parallel.
    For example, a typical call might be:
        bin/nutch crawl urls -dir crawl.test -depth 3 >& crawl.log
    Typically one starts testing one's configuration by crawling at low
    depths, and watching the output to check that desired pages are found.
    Once one is more confident of the configuration, then an appropriate
    depth for a full crawl is around 10. <===========
    Once crawling has completed, one can skip to the Searching section below.

-----------------------------------
Actually running nutch 2.x - steps
-----------------------------------
MANUALLY GOING THROUGH THE CYCLE 3 TIMES:

cd ~/apache-nutch-2.3.1/runtime/local
./bin/nutch inject urls

./bin/nutch generate -topN 50
./bin/nutch fetch -all
./bin/nutch parse -all
./bin/nutch updatedb -all

./bin/nutch generate -topN 50
./bin/nutch fetch -all
./bin/nutch parse -all
./bin/nutch updatedb -all

./bin/nutch generate -topN 50
./bin/nutch fetch -all
./bin/nutch parse -all
./bin/nutch updatedb -all
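The same cycle written as a loop, a minimal sketch of the manual steps above
(the iteration count is a parameter, defaulting to the 3 used here):

    #!/bin/bash
    # Run the Nutch 2.x inject/generate/fetch/parse/updatedb cycle N times.
    ITERATIONS=${1:-3}
    cd ~/apache-nutch-2.3.1/runtime/local || exit 1

    ./bin/nutch inject urls
    for i in $(seq 1 "$ITERATIONS"); do
        echo "--- cycle $i of $ITERATIONS ---"
        ./bin/nutch generate -topN 50
        ./bin/nutch fetch -all
        ./bin/nutch parse -all
        ./bin/nutch updatedb -all
    done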
Dump the output on the local filesystem:
    rm -rf /tmp/bla
    ./bin/nutch readdb -dump /tmp/bla [-crawlId ID] [-text]
    less /tmp/bla/part-r-00000

To dump the output to a location on HDFS instead, you need the HDFS host
name. The host is defined in /usr/local/hadoop/etc/hadoop/core-site.xml for
the property fs.defaultFS
(https://stackoverflow.com/questions/27956973/java-io-ioexception-incomplete-hdfs-uri-no-host);
the host is hdfs://node2/ in this case. So:
    hdfs dfs -rm -r /user/vagrant/dump    # -rm -r, since -rmdir only removes empty directories
    XXX ./bin/nutch readdb -dump user/vagrant/dump -text            ### won't work
    XXX ./bin/nutch readdb -dump hdfs:///user/vagrant/dump -text    ### won't work
    ./bin/nutch readdb -dump hdfs://node2/user/vagrant/dump -text
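To avoid hard-coding hdfs://node2, the fs.defaultFS value can also be read
back from the Hadoop config with the standard hdfs getconf command and
spliced into the dump path, e.g.:

    # Build the dump URI from the configured default filesystem.
    FS=$(hdfs getconf -confKey fs.defaultFS)    # e.g. hdfs://node2
    hdfs dfs -rm -r "$FS/user/vagrant/dump"
    ./bin/nutch readdb -dump "$FS/user/vagrant/dump" -text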
USING THE SCRIPT TO ATTEMPT TO CRAWL A SITE

* Choosing to repeat the cycle 10 times because, as per
  http://nutch.sourceforge.net/docs/en/tutorial.html:
  "Typically one starts testing one's configuration by crawling at low
  depths, and watching the output to check that desired pages are found.
  Once one is more confident of the configuration, then an appropriate depth
  for a full crawl is around 10."
* Use the ./bin/crawl script, providing the seed urls dir, the crawlId and
  the number of times to repeat (= 10):
      vagrant@node2:~/apache-nutch-2.3.1/runtime/local$ ./bin/crawl urls davidbHomePage 10
* View the downloaded crawls. This time we need to provide the crawlId to
  readdb, in order to get a dump of its text contents:
      hdfs dfs -rm -r hdfs://node2/user/vagrant/dump2
      ./bin/nutch readdb -dump hdfs://node2/user/vagrant/dump2 -text -crawlId davidbHomePage
* View the contents:
      hdfs dfs -cat hdfs://node2/user/vagrant/dump2/part-r-*
* FIND OUT THE NUMBER OF URLS DOWNLOADED FOR THE SITE:
      vagrant@node2:~/apache-nutch-2.3.1/runtime/local$ ./bin/nutch readdb -stats -crawlId davidbHomePage
      WebTable statistics start
      Statistics for WebTable:
      retry 0:        44
      status 5 (status_redir_perm):   4
      status 3 (status_gone): 1
      status 2 (status_fetched):      39
      jobs:   {[davidbHomePage]db_stats-job_local647846559_0001={jobName=[davidbHomePage]db_stats, jobID=job_local647846559_0001, counters={Map-Reduce Framework={MAP_OUTPUT_MATERIALIZED_BYTES=135, REDUCE_INPUT_RECORDS=8, SPILLED_RECORDS=16, MERGED_MAP_OUTPUTS=1, VIRTUAL_MEMORY_BYTES=0, MAP_INPUT_RECORDS=44, SPLIT_RAW_BYTES=935, FAILED_SHUFFLE=0, MAP_OUTPUT_BYTES=2332, REDUCE_SHUFFLE_BYTES=135, PHYSICAL_MEMORY_BYTES=0, GC_TIME_MILLIS=0, REDUCE_INPUT_GROUPS=8, COMBINE_OUTPUT_RECORDS=8, SHUFFLED_MAPS=1, REDUCE_OUTPUT_RECORDS=8, MAP_OUTPUT_RECORDS=176, COMBINE_INPUT_RECORDS=176, CPU_MILLISECONDS=0, COMMITTED_HEAP_BYTES=595591168}, File Input Format Counters ={BYTES_READ=0}, File System Counters={FILE_LARGE_READ_OPS=0, FILE_WRITE_OPS=0, FILE_READ_OPS=0, FILE_BYTES_WRITTEN=1788140, FILE_BYTES_READ=1223290}, File Output Format Counters ={BYTES_WRITTEN=275}, Shuffle Errors={CONNECTION=0, WRONG_LENGTH=0, BAD_ID=0, WRONG_MAP=0, WRONG_REDUCE=0, IO_ERROR=0}}}}
      TOTAL urls:     44
      max score:      1.0
      avg score:      0.022727273
      min score:      0.0
      WebTable statistics: done

------------------------------------
STOPPING CONDITION
------------------------------------
The stopping condition seems to be inbuilt:

* When I tell it to cycle 15 times, it stops after 6 cycles, saying there are
  no more URLs to fetch:

      vagrant@node2:~/apache-nutch-2.3.1/runtime/local$ ./bin/crawl urls davidbHomePage2 15
      ---
      No SOLRURL specified. Skipping indexing.
      Injecting seed URLs
      ...
      Thu Oct 3 09:22:23 UTC 2019 : Iteration 6 of 15
      Generating batchId
      Generating a new fetchlist
      Generating batchId
      Generating a new fetchlist
      /home/vagrant/apache-nutch-2.3.1/runtime/local/bin/nutch generate -D mapred.reduce.tasks=2 -D mapred.child.java.opts=-Xmx1000m -D mapred.reduce.tasks.speculative.execution=false -D mapred.map.tasks.speculative.execution=false -D mapred.compress.map.output=true -topN 50000 -noNorm -noFilter -adddays 0 -crawlId davidbHomePage2 -batchId 1570094569-27637
      GeneratorJob: starting at 2019-10-03 09:22:49
      GeneratorJob: Selecting best-scoring urls due for fetch.
      GeneratorJob: starting
      GeneratorJob: filtering: false
      GeneratorJob: normalizing: false
      GeneratorJob: topN: 50000
      GeneratorJob: finished at 2019-10-03 09:22:52, time elapsed: 00:00:02
      GeneratorJob: generated batch id: 1570094569-27637 containing 0 URLs
      Generate returned 1 (no new segments created)
      Escaping loop: no more URLs to fetch now
      vagrant@node2:~/apache-nutch-2.3.1/runtime/local$
      ---

* Running readdb -stats shows 44 URLs fetched, just as the first time (when
  the crawlId had been "davidbHomePage"):

      vagrant@node2:~/apache-nutch-2.3.1/runtime/local$ ./bin/nutch readdb -stats -crawlId davidbHomePage2
      ---
      WebTable statistics start
      Statistics for WebTable:
      retry 0:        44
      status 5 (status_redir_perm):   4
      status 3 (status_gone): 1
      status 2 (status_fetched):      39
      jobs:   {[davidbHomePage2]db_stats-job_local985519583_0001={jobName=[davidbHomePage2]db_stats, jobID=job_local985519583_0001, counters={Map-Reduce Framework={MAP_OUTPUT_MATERIALIZED_BYTES=135, REDUCE_INPUT_RECORDS=8, SPILLED_RECORDS=16, MERGED_MAP_OUTPUTS=1, VIRTUAL_MEMORY_BYTES=0, MAP_INPUT_RECORDS=44, SPLIT_RAW_BYTES=935, FAILED_SHUFFLE=0, MAP_OUTPUT_BYTES=2332, REDUCE_SHUFFLE_BYTES=135, PHYSICAL_MEMORY_BYTES=0, GC_TIME_MILLIS=4, REDUCE_INPUT_GROUPS=8, COMBINE_OUTPUT_RECORDS=8, SHUFFLED_MAPS=1, REDUCE_OUTPUT_RECORDS=8, MAP_OUTPUT_RECORDS=176, COMBINE_INPUT_RECORDS=176, CPU_MILLISECONDS=0, COMMITTED_HEAP_BYTES=552599552}, File Input Format Counters ={BYTES_READ=0}, File System Counters={FILE_LARGE_READ_OPS=0, FILE_WRITE_OPS=0, FILE_READ_OPS=0, FILE_BYTES_WRITTEN=1788152, FILE_BYTES_READ=1223290}, File Output Format Counters ={BYTES_WRITTEN=275}, Shuffle Errors={CONNECTION=0, WRONG_LENGTH=0, BAD_ID=0, WRONG_MAP=0, WRONG_REDUCE=0, IO_ERROR=0}}}}
      TOTAL urls:     44
      ---

----------------------------------------------------------------------
Testing URLFilters: testing a URL to see if it's accepted
----------------------------------------------------------------------
Use the command
    ./bin/nutch org.apache.nutch.net.URLFilterChecker -allCombined
(mentioned at
https://lucene.472066.n3.nabble.com/Correct-syntax-for-regex-urlfilter-txt-trying-to-exclude-single-path-results-td3600376.html).

Use it as follows:
    cd apache-nutch-2.3.1/runtime/local
    ./bin/nutch org.apache.nutch.net.URLFilterChecker -allCombined
Then paste the URL you want to test and press Enter.
    A + in front of the response means the URL is accepted.
    A - in front of the response means the URL is rejected.
You can keep pasting URLs to test against the filters until you send Ctrl-D
to terminate the input.
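Since the checker reads URLs line by line from stdin, it can also be driven
non-interactively; a sketch, with made-up test URLs:

    # Batch-test a list of URLs against the configured filters.
    cd ~/apache-nutch-2.3.1/runtime/local
    ./bin/nutch org.apache.nutch.net.URLFilterChecker -allCombined <<'EOF'
    http://www.nutch.org/
    http://www.nutch.org/excluded/path/page.html
    EOF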