Timestamp:
2019-11-13T23:08:37+13:00
Author:
ak19
Message:

Having finished sending all the crawl data to MongoDB:

1. Recrawled the two sites I had earlier noted required recrawling, 00152 and 00332. 00152 needed changes to how it was crawled: MP3 files had to be blocked, as there were HBase error messages about key values being too large.
2. Modified the regex-urlfilter.GS_TEMPLATE file to block mp3 files in general for future crawls too, in the part of the file where jpg etc. are already blocked by Nutch's default regex URL filters (a hedged example of such a rule follows this list).
3. Further had to restrict the 00152 site to be crawled only under its /maori/ subsection. Since the seedURL maori.html was not under a /maori/ URL, this revealed that the CCWETProcessor code did not yet allow the filters to accept seedURLs in the case where the crawl is restricted to a subsection (as expressed in the conf/sites-too-big-to-exhaustively-crawl file) but the seedURL itself does not match those restricting regex filters. In such cases CCWETProcessor now adds the non-matching seedURLs to the filters as well (so we fetch just the single seedURL page), alongside the filter on the requested subsection, and we still follow all pages linked from seedURLs that do match the subsection expression (a sketch of this logic also follows this list).
4. Added to_crawl.tar.gz to svn: the tarball of the to_crawl sites I actually ran Nutch over, i.e. all the site folders with their seedURL.txt and regex-urlfilter.txt files that batchcrawl.sh runs over. This is not the latest version of the sites folder and blacklist/whitelist files generated by CCWETProcessor, since the latest version was regenerated after the final modifications to CCWETProcessor, which happened after crawling was finished. However, to_crawl.tar.gz does include a manually modified 00152, with the correct regex-urlfilter file, and uses the newer regex-urlfilter.GS_TEMPLATE file that blocks mp3 files.
5. crawledNode6.tar.gz now contains the dump output for sites 00152 and 00332, which were crawled on node6 today (after which their processed dump.txt results were added into MongoDB).
6. MoreReading/mongodb.txt now contains the results of some queries I ran against the total Nutch-crawled data.
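For item 2: Nutch's regex-urlfilter files reject any URL matching a line that starts with "-". A minimal sketch of the kind of rule involved, with mp3 added to the suffix alternation; the suffix list here is abbreviated, and the actual regex-urlfilter.GS_TEMPLATE keeps Nutch's full default list:

    # Skip binary/media suffixes Nutch can't usefully parse; mp3/MP3 added so large
    # audio files no longer trigger HBase "key value too large" errors.
    -\.(gif|GIF|jpg|JPG|jpeg|JPEG|png|PNG|zip|ZIP|mpg|MPG|mov|MOV|exe|EXE|mp3|MP3)$

For item 3, the adjusted behaviour can be illustrated with a small sketch. This is not the actual CCWETProcessor code; the class, method, and variable names are hypothetical, and it only shows the idea of adding an exact-match filter for each seedURL that falls outside the controlled subsection:

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical sketch of the filter-generation idea described in item 3;
    // names and structure do not reflect the real CCWETProcessor class.
    public class SeedFilterSketch {

        // Build Nutch regex-urlfilter lines for a site restricted to a subsection
        // (e.g. "loquevendra318.com/fox/maori/"). Seed URLs outside that subsection
        // get an exact-match "+" rule so the single seed page is still fetched,
        // while link-following stays confined to the subsection.
        static List<String> buildFilters(String controlledPrefix, List<String> seedUrls) {
            List<String> filters = new ArrayList<>();
            filters.add("+^https?://" + escape(controlledPrefix));  // whole subsection allowed

            for (String seed : seedUrls) {
                String stripped = seed.replaceFirst("^https?://", "");
                if (!stripped.startsWith(controlledPrefix)) {
                    // Seed page lies outside the subsection: allow just that one URL.
                    filters.add("+^https?://" + escape(stripped) + "$");
                }
            }
            filters.add("-.");  // reject everything else
            return filters;
        }

        // Escape regex metacharacters that commonly occur in URLs.
        static String escape(String s) {
            return s.replace(".", "\\.").replace("?", "\\?");
        }

        public static void main(String[] args) {
            buildFilters("loquevendra318.com/fox/maori/",
                         List.of("http://loquevendra318.com/fox/maori.html"))
                .forEach(System.out::println);
        }
    }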
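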

File:
1 edited

  • other-projects/maori-lang-detection/conf/sites-too-big-to-exhaustively-crawl.txt

    r33604 r33666

     #     However, if the seedurl's domain is an exact match on topsite-base-url, the seedurl will go
     #     into the file unprocessed-topsite-matches.txt and the site/page won't be crawled.
    -#   - FOLLOW-LINKS-WITHIN-TOPSITE: if pages linked from the seedURL page can be followed and
    -#     downloaded, as long as it's within the same subdomain matching the topsite-base-url.
    +#   - FOLLOW-LINKS-WITHIN-TOPSITE: download seedURL pages and pages linked from each seedURL
    +#     page should be followed and downloaded too, as long as they're within the same subdomain
    +#     matching the topsite-base-url.
     #     This is different from SUBDOMAIN-COPY, as that can download all of a specific subdomain but
     #     restricts against downloading the entire domain (e.g. all pinky.blogspot.com and not anything

    …

     # special case
     mi.centr-zashity.ru,SINGLEPAGE
    +
    +# we want the http://loquevendra318.com/fox/maori.html seed URL but also
    +# pages within the following subsection
    +loquevendra318.com,loquevendra318.com/fox/maori/

     martinvrijland.nl,martinvrijland.nl/mi/
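Given the loquevendra318.com entry added above and the behaviour described in the commit message, the per-site regex-urlfilter.txt generated for that site's folder would look roughly like the following. This is a hypothetical illustration only; the exact rules CCWETProcessor writes may differ:

    # allow the seed page itself, which sits outside the /fox/maori/ subsection
    +^https?://loquevendra318\.com/fox/maori\.html$
    # allow everything under the controlled subsection
    +^https?://loquevendra318\.com/fox/maori/
    # reject all other URLs
    -.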