Timeline


01.11.2019:

20:14 Changeset [33618] by ak19
Adding in the download URL
17:13 Changeset [33617] by ak19
Node5 is now full and here is the finished crawl (up to and including site …

31.10.2019:

20:05 Changeset [33616] by ak19
Beginnings of Java class that is to interact with MongoDB. I don't yet …
20:03 Changeset [33615] by ak19
1. Worked out how to configure log4j to log both to console and logfile, …
11:22 Changeset [33614] by kjdon
added a new line
11:18 Changeset [33613] by kjdon
added allowdocumentediting and allowmapgpsediting options, plus also added …
11:00 Changeset [33612] by kjdon
work to do with params. add in default values to params if they are not …
10:55 Changeset [33611] by kjdon
added global setting to params - these are for params that are valid …
10:54 Changeset [33610] by kjdon
USER_SESSION_CACHE_ATT moved to GSParams, as it is stored in session like a …

30.10.2019:

23:03 Changeset [33609] by ak19
The tar files containing the crawled sites data shouldn't be called tar.gz …
23:02 Changeset [33608] by ak19
1. New script to export from HBase so that we could in theory reimport …

29.10.2019:

18:33 Changeset [33607] by ak19
Updated with the remaining successfully crawled sites on node4 before …
15:18 Changeset [33606] by ak19
1. Committing crawl data from node3 (2nd VM for nutch crawling). 2. …
14:54 Changeset [33605] by ak19
Node 4 VM still works, but committing first set of crawled sites on there

24.10.2019:

23:22 Changeset [33604] by ak19
1. Better output into possible-product-sites.txt including the overseas …
22:04 Changeset [33603] by ak19
Incorporating Dr Nichols' suggestion to help weed out product sites: if tld …

23.10.2019:

23:49 Changeset [33602] by ak19
1. The final csv file, mri-sentences.csv, is now written out. 2. Only …
23:22 Changeset [33601] by ak19
Creates the 2nd csv file, with info about webpages. At present stores …
23:05 Changeset [33600] by ak19
Work in progress of writing out CSV files. In future, may write the same …

22.10.2019:

20:49 Changeset [33599] by ak19
First one-third sites crawled. Committing to SVN despite the tarred …
20:19 Changeset [33598] by ak19
More instructions on setting up Nutch now that I've remembered to commit …
20:05 Changeset [33597] by ak19
Committing active version of template file which has a newline at end of …
18:44 Changeset [33596] by ak19
Adding in the nutch-site.xml and regex-urlfilter.GS_TEMPLATE template file …
14:05 Changeset [33595] by kjdon
new displayBaskets template - to avoid replicating code in query and …
14:00 Changeset [33594] by kjdon
call gslib:displayBasket instead of replicating the code here
13:59 Changeset [33593] by kjdon
the test for facets should be facetList/facet/count, as the facets get …
13:51 Changeset [33592] by kjdon
reindented the file
11:51 Changeset [33591] by kjdon
added in some strings for 'this collection contains x documents and was …
11:12 Changeset [33590] by kjdon
added 'this collection contains X documents and was last built Y days ago' …

21.10.2019:

21:45 Changeset [33589] by cpb16
final01. Need Map results still

18.10.2019:

23:20 Changeset [33588] by ak19
Committing the MRI sentence model that I'm actually using, the one in my …
23:16 Changeset [33587] by ak19
1. Better stats reporting on crawled sites: not just if a page was in MRI …
22:20 Changeset [33586] by ak19
Refactored MaoriTextDetector.java class into more general …
21:41 Changeset [33585] by ak19
Much simpler way of using sentence and language detection model to work on …
21:20 Changeset [33584] by ak19
Committing experimental version 2 using the sentence detector model, …
21:20 Changeset [33583] by ak19
Committing experimental version 1 using the sentence detector model, …

17.10.2019:

23:12 Changeset [33582] by ak19
NutchTextDumpProcessor prints each crawled site's stats: number of …
21:53 Changeset [33581] by ak19
Minor fix. Noticed when looking for work I did on MRI sentence detection
21:44 Changeset [33580] by ak19
Finally fixed the thus-far identified bugs when parsing dump.txt.
21:05 Changeset [33579] by ak19
Debugging. Solved one problem.
19:31 Changeset [33578] by ak19
Corrections for compiling the 2 new classes.
19:12 Changeset [33577] by ak19
Forgot to adjust usage statement to say that silent mode was already …

16.10.2019:

23:37 Changeset [33576] by ak19
Introducing 2 new Java files still being written and untested. …
23:36 Changeset [33575] by ak19
Correcting usage string for CCWETProcessor before committing new java …
23:35 Changeset [33574] by ak19
If nutch stores a crawled site in more than 1 file, then cat all of them …
21:39 Changeset [33573] by ak19
Forgot to document that spaces were also allowed as separator in the input …
21:18 Changeset [33572] by ak19
Only meant to store the wet.gz versions of these files, not also the …
21:11 Changeset [33571] by ak19
Adding Dr Bainbridge's suggestion of appending the crawlId of each site to …
20:04 Changeset [33570] by ak19
Need to check if UNFINISHED file actually exists before moving it across …
20:00 Changeset [33569] by ak19
1. batchcrawl.sh now does what it should have from the start, which is to …

14.10.2019:

23:36 Changeset [33568] by ak19
1. More sites greylisted and blacklisted, discovered as I attempted to …
22:40 Changeset [33567] by ak19
batchcrawl.sh now supports -all flag (and prints usage on 0 args). The …
22:07 Changeset [33566] by ak19
batchcrawl.sh script now supports taking a comma or space separated list …
21:04 Changeset [33565] by ak19
CCWETProcessor: domain url now goes in as a seedURL after the individual …
21:01 Changeset [33564] by ak19
batchcrawl.sh now does the crawl and logs output of the crawl, dumps text …

11.10.2019:

23:29 Changeset [33563] by ak19
Committing inactive testing batch scripts (only creates the …
21:52 Changeset [33562] by ak19
1. The sites-too-big-to-exhaustively-crawl.txt is now a csv file of a …
20:49 Changeset [33561] by ak19
1. sites-too-big-to-exhaustively-crawl.txt is now a comma separated list. …

10.10.2019:

23:49 Changeset [33560] by ak19
1. Incorporated Dr Bainbridge's suggested improvements: only when there is …
23:44 Changeset [33559] by ak19
1. Special string COPY changed to SUBDOMAIN-COPY after Dr Bainbridge …
23:41 Changeset [33558] by ak19
Committing cumulative changes since last commit.

09.10.2019:

23:10 Changeset [33557] by ak19
Implemented the topSitesMap of topsite domain to url pattern in the only …
18:58 Changeset [33556] by ak19
Blacklisted wikipedia pages that are actually in other languages which had …
18:43 Changeset [33555] by ak19
Modified top sites list as Dr Bainbridge described: suffixes for the same …
18:11 Changeset [33554] by ak19
Added more to blacklist and greylist. And removed remaining duplicates …

04.10.2019:

22:19 Changeset [33553] by ak19
Comments
22:00 Changeset [33552] by ak19
1. Code now processes ccrawldata folder, containing each individual common …
19:35 Changeset [33551] by ak19
Added in top 500 urls from moz.com/top500 and removed duplicates, and …
19:06 Changeset [33550] by ak19
First stage of introducing sites-too-big-to-exhaustively-crawl.txt: split …
18:29 Changeset [33549] by ak19
All the downloaded commoncrawl MRI warc.wet.gz data from Sep 2018 (when …
14:36 Changeset [33548] by davidb
Include new wavesurfer sub-project to install
14:30 Changeset [33547] by davidb
Initial cut at wavesurfer JS audio player version of AMC music content …
14:19 Changeset [33546] by davidb
Initial cut at wave-surfer based JS audio player extension for Greenstone

03.10.2019:

22:38 Changeset [33545] by ak19
Mainly changes to crawling-Nutch.txt and some minor changes to other txt …
18:56 Changeset [33544] by ak19
1. Dr Bainbridge had the correct fix for solr dealing with phrase …