source: main/trunk/greenstone2/perllib/plugins/NutchTextDumpPlugin.pm@ 34131

Last change on this file since 34131 was 34131, checked in by ak19, 4 years ago

Allowing input keep-urls-file to contain a comma followed by country code at end, as that's the sort of URLs file I want for the newest commoncrawl collection. The URLs file is the one at http://trac.greenstone.org/browser/other-projects/maori-lang-detection/mongodb-data-auto/isMRI_full_manualList_globalDomains_whereAPageContainsMRI.txt

File size: 33.0 KB
1###########################################################################
2#
3# NutchTextDumpPlugin.pm -- plugin for dump.txt files generated by Nutch
4#
5# A component of the Greenstone digital library software
6# from the New Zealand Digital Library Project at the
7# University of Waikato, New Zealand.
8#
9# Copyright (C) 2002 New Zealand Digital Library Project
10#
11# This program is free software; you can redistribute it and/or modify
12# it under the terms of the GNU General Public License as published by
13# the Free Software Foundation; either version 2 of the License, or
14# (at your option) any later version.
15#
16# This program is distributed in the hope that it will be useful,
17# but WITHOUT ANY WARRANTY; without even the implied warranty of
18# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
19# GNU General Public License for more details.
20#
21# You should have received a copy of the GNU General Public License
22# along with this program; if not, write to the Free Software
23# Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
24#
25###########################################################################
26
27# This plugin was originally created to process Nutch dump.txt files produced by recrawling commoncrawl (CC)
28# results for pages detected by CC as being in Māori.
29# It splits each web site's dump.txt into its individual records: as each record represents a web page,
30# this produces one greenstone document per web page.
31#
32# For a commoncrawl collection of siteID-labelled folders, each containing a dump.txt file (a rough collectionConfig.xml sketch follows this list):
33# - set <importOption name="OIDtype" value="dirname"/>
34# - Create 2 List browsing classifiers (with bookshelf_type set to always) on ex.siteID and ex.srcDomain
35# both sorted by ex.srcURL, and an ex.Title classifier.
36# For the ex.srcDomain classifier, set removeprefix to: https?\:\/\/(www\.)?
37# An alternative is to build that List classifier on ex.basicDomain instead of ex.srcDomain.
38# Set this List classifier's "partition_type_within_level" option to "per_letter".
39# - Add search indexes on text (default), Title, basicDomain, siteID, Identifier, srcURL (not working)
40#
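# A rough collectionConfig.xml sketch of the setup above (option names and values here are given from
# memory as an illustration only; check them against the List classifier's actual options in your GS3):
# <importOption name="OIDtype" value="dirname"/>
# <classifier name="List">
#   <option name="-metadata" value="ex.srcDomain"/>
#   <option name="-sort_leaf_nodes_using" value="ex.srcURL"/>
#   <option name="-bookshelf_type" value="always"/>
#   <option name="-removeprefix" value="https?\:\/\/(www\.)?"/>
# </classifier>
#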
41# Finally, in the "display" format statement, add the following before the "wrappedSectionText" to
42# display the most relevant metadata of each record:
43 # <gsf:template name="documentContent">
44 # <div id="nutch-dump-txt-record">
45 # <h3>Record:</h3>
46 # <br/>
47 # <dl>
48 # <dt>URL:</dt>
49 # <dd>
50 # <gsf:metadata name="srcURL"/>
51 # </dd>
52 # <dt>Title:</dt>
53 # <dd>
54 # <gsf:metadata name="ex.Title"/>
55 # </dd>
56 # <dt>Identifier:</dt>
57 # <dd>
58 # <gsf:metadata name="Identifier"/>
59 # </dd>
60 # <dt>SiteID:</dt>
61 # <dd>
62 # <gsf:metadata name="siteID"/>
63 # </dd>
64 # <dt>Status:</dt>
65 # <dd>
66 # <gsf:metadata name="status"/>
67 # </dd>
68 # <dt>ProtocolStatus:</dt>
69 # <dd>
70 # <gsf:metadata name="protocolStatus"/>
71 # </dd>
72 # <dt>ParseStatus:</dt>
73 # <dd>
74 # <gsf:metadata name="parseStatus"/>
75 # </dd>
76 # <dt>CharEncodingForConversion:</dt>
77 # <dd>
78 # <gsf:metadata name="CharEncodingForConversion"/>
79 # </dd>
80 # <dt>OriginalCharEncoding:</dt>
81 # <dd>
82 # <gsf:metadata name="OriginalCharEncoding"/>
83 # </dd>
84 # </dl>
85 # </div>
86
87# + DONE: remove illegible values for metadata _rs_ and _csh_ in the example below before
88# committing, in case their encoding affects the loading/reading in of this perl file.
89#
90# Example record in dump.txt to process:
91 # https://www.whanau-tahi.school.nz/ key: nz.school.whanau-tahi.www:https/
92 # OR: http://yutaka.it-n.jp/apa/750010010.html key: jp.it-n.yutaka:http/apa/750010010.html
93 # baseUrl: null
94 # status: 2 (status_fetched)
95 # fetchTime: 1575199241154
96 # prevFetchTime: 1572607225779
97 # fetchInterval: 2592000
98 # retriesSinceFetch: 0
99 # modifiedTime: 0
100 # prevModifiedTime: 0
101 # protocolStatus: SUCCESS, args=[]
102 # signature: d84c84ccf0c86aa16a19e03cb1fc5827
103 # parseStatus: success/ok (1/0), args=[]
104 # title: Te Kura Kaupapa Māori o Te Whānau Tahi
105 # score: 1.0
106 # marker _injmrk_ : y
107 # marker _updmrk_ : 1572607228-9584
108 # marker dist : 0
109 # reprUrl: null
110 # batchId: 1572607228-9584
111 # metadata CharEncodingForConversion : utf-8
112 # metadata OriginalCharEncoding : utf-8
113 # metadata _rs_ :
114 # metadata _csh_ :
115 # text:start:
116 # Te Kura Kaupapa Māori o Te Whānau Tahi He mihi He mihi Te Kaupapa Ngā Tāngata Te Kākano Te Pihinga Te Tipuranga Te Puāwaitanga Te Tari Te Poari Matua Whakapā mai He mihi He mihi Te Kaupapa Ngā Tāngata Te Kākano Te Pihinga Te Tipuranga Te Puāwaitanga Te Tari Te Poari Matua Whakapā mai TE KURA KAUPAPA MĀORI O TE WHĀNAU TAHI He mihi Kei te mōteatea tonu nei ngā mahara ki te huhua kua mene atu ki te pō, te pōuriuri, te pōtangotango, te pō oti atu rā. Kua rite te wāhanga ki a rātou, hoki mai ki te ao tūroa nei Ko Io Matua Kore te pūtaketanga, te pūkaea, te pūtātara ka rangona whānuitia e te ao. Ko tāna ko ngā whetū, te marama, te haeata ki a Tamanui te rā. He atua i whakateretere mai ai ngā waka i tawhiti nui, i tawhiti roa, i tawhiti mai rā anō. Kei nga ihorei, kei ngā wahapū, kei ngā pukumahara, kei ngā kanohi kai mātārae o tō tātou nei kura Aho Matua, Te Kura Kaupapa Māori o Te Whanau Tahi. Anei rā te maioha ki a koutou katoa e pūmau tonu ki ngā wawata me ngā whakakitenga i whakatakotoria e ngā poupou i te wā i a rātou. Ka whakanuia hoki te toru tekau tau o tēnei kura mai i tōna orokohanga timatanga tae noa ki tēnei wā Ka pūmau tōnu mātou ki te whakatauki o te kura e mea ana “Poipoia ō tātou nei pūmanawa” Takiritia tonutia te ra ki runga i Te Kura Kaupapa Maori o Te Whanau Tahi . Back to Top " Poipoia ō tātou nei pūmanawa -  Making our potential a reality "   ©  Te Kura Kaupapa Māori o Te Whānau Tahi, 2019  Cart ( 0 )
117 # text:end:
118 #
119 # https://www.whanau-tahi.school.nz/cart key: nz.school.whanau-tahi.www:https/cart
120 # baseUrl: null
121 # status: 2 (status_fetched)
122 # ...
123#
124# - Some records may have empty text content between the text:start: and text:end: markers,
125# while other records may be missing these markers along with any text.
126# - Metadata is of the form key : value, but some metadata values themselves contain ":"; for example,
127#   "protocolStatus" metadata can have a URL as its value, whose protocol part contains ":" (see the example below this list).
128# - metadata _rs_ and _csh_ contain illegible values, so this code discards them when storing metadata.
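# For instance, a metadata line like the following (a made-up illustration, not copied from a real dump.txt)
# has to be split on the FIRST ":" only, otherwise its value would lose its own colons:
#   protocolStatus: TEMP_MOVED, args=[https://example.org/new-location]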
129#
130# If you provide a keep_urls_file when configuring NutchTextDumpPlugin and the path given is relative,
131# the plugin will look for that file (e.g. urls.txt) in the collection's etc folder.
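# For example (an illustrative sketch only), the plugin might be configured in the GS3 collection's
# collectionConfig.xml along these lines:
# <plugin name="NutchTextDumpPlugin">
#   <option name="-keep_urls_file" value="isMRI_urls.txt"/>
# </plugin>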
132
133
134package NutchTextDumpPlugin;
135
136use SplitTextFile;
137
138use Encode;
139use unicode;
140use util;
141
142use strict;
143no strict 'refs'; # allow filehandles to be variables and vice versa
144
145# TODO:
146# + 1. Split each dump.txt file into its individual records as individual docs
147# + 2. Store the meta of each individual record/doc
148# ?3. Name each doc, siteID.docID else HASH internal text. See EmailPlugin?
149# - In SplitTextFile::read(), why is $segment (which also counts discarded docs) used to add the record ID
150# rather than $count (which counts only included docs)? I am referring to this code:
151# $self->add_OID($doc_obj, $id, $segment);
152# The way I've solved this is by setting the OIDtype importOption. Not sure if this is what was required.
153# + 4. Keep a map of all URLs seen - whitelist URLs.
154# + 5. Implement the optional input file of URLs: if infile provided, keep only those records
155# whose URLs are in the map. Only these matching records should become docs.
156# 6. Rebuild full collection of all dump.txt files with this collection design.
157#
158# TIDY UP:
159# + Create util::trim()
160# + Add to perl's strings.properties: NutchTextDumpPlugin.keep_urls_file
161#
162# CLEANUP:
163# + Remove MetadataRead functions and inheritance
164#
165# QUESTIONS:
166# - encoding = utf-8 was changed to "utf8" as required by the copied to_utf8(str) method. Why does it not convert
167# the string parameter, but instead fail at the decode() step? Is it because the string is already in UTF8?
168# - Problem converting text in the full set of nutch dump.txt files when the encoding is windows-1252 or Shift-JIS.
169# - TODOs
170#
171
172# CHECK:
173# - title fallback is URL.
174# + util::tidy_up_OID() prints warning. SiteID is foldername and OIDtype=dirname, so fully numeric
175# siteID to OID conversion results in warning message that siteID is fully numeric and gets 'D' prefixed.
176# Is this warning still necessary?
177# - Ask about binmode usage (for debugging) in this file
178
179# To get all the isMRI results, I used Robo-3T against our mongodb, following
180# the instructions at http://trac.greenstone.org/browser/other-projects/maori-lang-detection/MoreReading/mongodb.txt
181# I launched Robo-3T and connected to the mongodb.
182#
183# Then in the "ateacrawldata" database, I ran the following queries
184# to get a URL listing of all the Webpages where isMRI = true as determined
185# by apache openNLP.
186#
187#db.getCollection('Webpages').find({isMRI:true}).count();
188#7830
189#
190#db.getCollection('Webpages').find({isMRI:true},{URL: 1, _id: 0});
191#
192#Then I set robo-3T's output display to display 8000 results on a page, then copied the results into this file below.
193#
194# I cleaned out all the JSON from the results using regex in Notepad++.
195# This then becomes our urls.txt file, which I put into the cc nutch crawl
196# GS3 collection's etc folder under the name isMRI_urls.txt,
197# so that only webpages which apache Open-NLP detected as isMRI get processed
198# into our collection.
199# Remember to configure the NutchTextDumpPlugin with option "keep_urls_file" = isMRI_urls.txt to make use of this.
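# Each line of that URLs file is expected to be a full URL, optionally followed by a comma and an
# uppercase country code, which this plugin strips off. For example (illustrative entries only):
#   https://www.whanau-tahi.school.nz/,NZ
#   http://yutaka.it-n.jp/apa/750010010.html,UNKNOWN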
200#
201# + ex meta -> don't add with ex. prefix
202# + check for and call to setup_keep_urls(): move into process() rather than doing this in a more convoluted way in can_process_this_file()
203# + util::tidy_up_oid() -> print callstack to find why it's called on every segment
204# X- binmode STDERR: work out what default mode on STDERR is and reset to that after printing debug messages in utf8 binmode
205# - test collection to check various encodings with and without to_utf8() function - tested collection 00436 in collection cctest3.
206# The srcURL .../divrey/shaar.htm (Identifier: D00436s184) is in Hebrew and described as being in char encoding iso-8859-8.
207# But when I paste the build output when using NutchTextDumpPlugin.pm_debug_iso-8859-8
208# into emacs, the text for this record reads and scrolls R to L in emacs.
209# When previewing the text in the full text section in GS3, it reads L to R.
210# The digits used in the text seem to match, occurring in reverse order from each other between emacs and GS3 preview.
211# Building displays error messages if to_utf8() called to decode this record's title meta or full text
212# using the discovered encoding.
213
214sub BEGIN {
215 @NutchTextDumpPlugin::ISA = ('SplitTextFile');
216 unshift (@INC, "$ENV{'GSDLHOME'}/perllib/cpan");
217}
218
219my $arguments =
220 [ { 'name' => "keep_urls_file",
221 'desc' => "{NutchTextDumpPlugin.keep_urls_file}",
222 'type' => "string",
223 #'deft' => "urls.txt",
224 'reqd' => "no" },
225 { 'name' => "process_exp",
226 'desc' => "{BaseImporter.process_exp}",
227 'type' => "regexp",
228 'reqd' => "no",
229 'deft' => &get_default_process_exp() },
230 { 'name' => "split_exp",
231 'desc' => "{SplitTextFile.split_exp}",
232 'type' => "regexp",
233 'reqd' => "no",
234 'deft' => &get_default_split_exp() }
235 ];
236
237my $options = { 'name' => "NutchTextDumpPlugin",
238 'desc' => "{NutchTextDumpPlugin.desc}",
239 'abstract' => "no",
240 'inherits' => "yes",
241 'explodes' => "yes",
242 'args' => $arguments };
243
244sub new {
245 my ($class) = shift (@_);
246 my ($pluginlist,$inputargs,$hashArgOptLists) = @_;
247 push(@$pluginlist, $class);
248
249 push(@{$hashArgOptLists->{"ArgList"}},@{$arguments});
250 push(@{$hashArgOptLists->{"OptList"}},$options);
251
252 my $self = new SplitTextFile($pluginlist, $inputargs, $hashArgOptLists);
253
254 if ($self->{'info_only'}) {
255 # don't worry about the options
256 return bless $self, $class;
257 }
258
259 $self->{'keep_urls_processed'} = 0;
260 $self->{'keep_urls'} = undef;
261
262 #return bless $self, $class;
263 $self = bless $self, $class;
264    # Can only call any $self->method() AFTER the bless operation above, i.e. only from this point onward
265 return $self;
266}
267
268
269sub setup_keep_urls {
270 my $self = shift (@_);
271
272 my $verbosity = $self->{'verbosity'};
273 my $outhandle = $self->{'outhandle'};
274 my $failhandle = $self->{'failhandle'};
275
276 $self->{'keep_urls_processed'} = 1; # flag to track whether this method has been called already during import
277
278 #print $outhandle "@@@@ In NutchTextDumpPlugin::setup_keep_urls() - this method should only be called once and only during import.pl\n";
279
280 if(!$self->{'keep_urls_file'}) {
281 my $msg = "NutchTextDumpPlugin INFO: No urls file provided.\n" .
282 " No records will be filtered.\n";
283 print $outhandle $msg if ($verbosity > 2);
284
285 return;
286 }
287
288 # read in the keep urls files
289 my $keep_urls_file = &util::locate_config_file($self->{'keep_urls_file'});
290 if (!defined $keep_urls_file)
291 {
292	my $msg = "NutchTextDumpPlugin INFO: Can't locate urls file " . $self->{'keep_urls_file'} . ".\n" .
293 " No records will be filtered.\n";
294
295 print $outhandle $msg;
296
297 $self->{'keep_urls'} = undef;
298 # TODO: Not a fatal error if $keep_urls_file can't be found: it just means all records
299 # in dump.txt will be processed?
300 }
301 else {
302 #$self->{'keep_urls'} = $self->parse_keep_urls_file($keep_urls_file, $outhandle);
303 #$self->{'keep_urls'} = {};
304 $self->parse_keep_urls_file($keep_urls_file, $outhandle, $failhandle);
305 }
306
307 #if(defined $self->{'keep_urls'}) {
308 # print STDERR "@@@@ keep_urls hash map contains:\n";
309 # map { print STDERR $_."=>".$self->{'keep_urls'}->{$_}."\n"; } keys %{$self->{'keep_urls'}};
310 #}
311
312}
313
314
315sub parse_keep_urls_file {
316 my $self = shift (@_);
317 my ($urls_file, $outhandle, $failhandle) = @_;
318
319 # https://www.caveofprogramming.com/perl-tutorial/perl-hashes-a-guide-to-associative-arrays-in-perl.html
320 # https://stackoverflow.com/questions/1817394/whats-the-difference-between-a-hash-and-hash-reference-in-perl
321 $self->{'keep_urls'} = {}; # hash reference init to {}
322
323 # What if it is a very long file of URLs? Need to read a line at a time!
324 #my $contents = &FileUtils::readUTF8File($urls_file); # could just call $self->read_file() inherited from SplitTextFile's parent ReadTextFile
325 #my @lines = split(/(?:\r?\n)+/, $$textref);
326
327 # Open the file in UTF-8 mode https://stackoverflow.com/questions/2220717/perl-read-file-with-encoding-method
328 # and read in line by line into map
329 my $fh;
330 if (open($fh,'<:encoding(UTF-8)', $urls_file)) {
331 while (defined (my $line = <$fh>)) {
332 $line = &util::trim($line); #$line =~ s/^\s+|\s+$//g; # trim whitespace
333
334 if($line =~ m@^https?://@) { # add only URLs
335 # remove any ",COUNTRYCODE" at end
336 # country code can be NZ but also UNKNOWN, so not 2 chars
337 $line =~ s/,[A-Z]+$//;
338 #print STDERR "LINE: |$line|\n";
339 $self->{'keep_urls'}->{$line} = 1; # add the url to our perl hash
340 }
341 }
342 close $fh;
343 } else {
344 my $msg = "NutchTextDumpPlugin ERROR: Unable to open file keep_urls_file: \"" .
345 $self->{'keep_urls_file'} . "\".\n " .
346 " No records will be filtered.\n";
347 print $outhandle $msg;
348 print $failhandle $msg;
349	# Not fatal. TODO: should this be fatal? Or is it acceptable that all records still get processed
350	# just because the specified keep-urls.txt file could not be opened?
351 }
352
353 # If keep_urls hash is empty, ensure it is undefined from this point onward
354    # Use if(!keys %hash) to RELIABLY test for an empty hash
355 # https://stackoverflow.com/questions/9444915/how-to-check-if-a-hash-is-empty-in-perl
356 #
357    # But one may not do: keys $hashref; only: keys %hash is allowed.
358 # Unable to work out how to dereference the hashref that is $self->{'keep_urls'},
359 # in order for me to then finally get the keys of the hashmap it refers to
360 # Googled: perl convert reference to hashmap
361 # The way to dereference hashref and get the keys is at https://www.thegeekstuff.com/2010/06/perl-hash-reference/
362 # keys % { $hash_ref };
363 my $hashmap_ref = $self->{'keep_urls'};
364 my %urls_map = %$hashmap_ref;
365 if(!keys %urls_map) {
366 $self->{'keep_urls'} = undef;
367 }
368
369}
370
371# Accept "dump.txt" files (which are in numeric siteID folders),
372# and txt files with numeric siteID, e.g. "01441.txt"
373# in case the dump.txt files were preprocessed by renaming them this way.
374sub get_default_process_exp {
375 my $self = shift (@_);
376
377 return q^(?i)((dump|\d+)\.txt)$^;
378}
379
380
381sub get_default_split_exp {
382
383    # prev line is either an empty line or the start of dump.txt
384 # current line should start with url protocol and contain " key: .... http(s)/"
385 # \r\n for msdos eol, \n for unix
386
387 # The regex return value of this method is passed into a call to perl split.
388 # Perl's split(), by default throws away delimiter
389 # Any capturing group that makes up or is part of the delimiter becomes a separate element returned by split
390 # We want to throw away the empty newlines preceding the first line of a record "https? .... key: https?/"
391 # but we want to keep that first line as part of the upcoming record.
392 # - To keep the first line of a record, though it becomes its own split-element, use capture groups in split regex:
393 # https://stackoverflow.com/questions/14907772/split-but-keep-delimiter
394 # - To skip the unwanted empty lines preceding the first line of a record use ?: in front of its capture group
395 # to discard that group:
396 # https://stackoverflow.com/questions/3512471/what-is-a-non-capturing-group-in-regular-expressions
397 # - Next use a positive look-ahead (?= in front of capture group, vs ?! for negative look ahead)
398 # to match but not capture the first line of a record (so the look-ahead matched is retained as the
399 # first line of the next record):
400 # https://stackoverflow.com/questions/14907772/split-but-keep-delimiter
401 # and http://www.regular-expressions.info/lookaround.html
402 # - For non-greedy match, use .*?
403 # https://stackoverflow.com/questions/11898998/how-can-i-write-a-regex-which-matches-non-greedy
404 return q^(?:$|\r?\n\r?\n)(?=https?://.+?\skey:\s+.*?https?/)^;
405
406}
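# For illustration only (nothing in this comment is executed by the plugin): as noted above, the
# expression returned by get_default_split_exp() is handed to perl's split by SplitTextFile, roughly
#   my @segments = split(/$split_exp/, $dump_txt_contents);
# so a dump.txt holding two records separated by a blank line yields two segments, each still
# beginning with its own "https?://... key: ..." line thanks to the non-consuming look-ahead.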
407
408# TODO: Copied method from MARCPlugin.pm and uncommented return statement when encoding = utf8
409# Move to a utility perl file, since code is mostly shared?
410# The bulk of this function is based on read_line in multiread.pm
411# Unable to use the original read_line because it expects to get its input
412# from a file. Here the line to be converted is passed in as a string
413
414# TODO:
415# Is this function even applicable to NutchTextDumpPlugin?
416# I get errors in this method when encoding is utf-8 in the decode step.
417# I get warnings/errors somewhere in this file (maybe also at decode) when encoding is windows-1252.
418
419sub to_utf8
420{
421 my $self = shift (@_);
422 my ($encoding, $line) = @_;
423
424 if ($encoding eq "utf8") {
425 # nothing needs to be done
426 return $line;
427 } elsif ($encoding eq "iso_8859_1" || $encoding eq "windows-1252") { # TODO: do this also for windows-1252?
428 # we'll use ascii2utf8() for this as it's faster than going
429 # through convert2unicode()
430 #return &unicode::ascii2utf8 (\$line);
431 $line = &unicode::ascii2utf8 (\$line);
432 } else {
433
434 # everything else uses unicode::convert2unicode
435 $line = &unicode::unicode2utf8 (&unicode::convert2unicode ($encoding, \$line));
436 }
437 # At this point $line is a binary byte string
438 # => turn it into a Unicode aware string, so full
439 # Unicode aware pattern matching can be used.
440 # For instance: 's/\x{0101}//g' or '[[:upper:]]'
441
442 return decode ("utf8", $line);
443}
444
445
446
447# do plugin specific processing of doc_obj
448# This gets done for each record found by SplitTextFile in the nutch dump.txt files.
449sub process {
450 my $self = shift (@_);
451 my ($textref, $pluginfo, $base_dir, $file, $metadata, $doc_obj, $gli) = @_;
452
453 # Only load the urls from the keep_urls_file into a hash if we've not done so before.
454 # Although this method is called on each dump.txt file found, and we want to only setup_keep_urls()
455 # once for a collection and only during import and not buildcol, it's best to do the check and setup_keep_urls()
456 # call here, because this subroutine, process(), is only called during import() and not during buildcol.
457 # During buildcol, can_process_this_file() is not called on dump.txt files but on folders (archives folder).
458# Only if this plugin's can_process_this_file() is called on a dump.txt file will this process() be called
459# on each segment of that dump.txt file.
460 # So this is the best spot to ensure we've setup_keep_urls() here, if we haven't already:
461
462 if(!$self->{'keep_urls_processed'}) {
463 $self->setup_keep_urls();
464 }
465
466
467 my $outhandle = $self->{'outhandle'};
468 my $filename = &util::filename_cat($base_dir, $file);
469
470
471 my $cursection = $doc_obj->get_top_section();
472
473 # https://perldoc.perl.org/functions/binmode.html
474 # "To mark FILEHANDLE as UTF-8, use :utf8 or :encoding(UTF-8) . :utf8 just marks the data as UTF-8 without further checking,
475 # while :encoding(UTF-8) checks the data for actually being valid UTF-8. More details can be found in PerlIO::encoding."
476 # https://stackoverflow.com/questions/27801561/turn-off-binmodestdout-utf8-locally
477 # Is there anything useful here:
478 # https://perldoc.perl.org/PerlIO/encoding.html and https://stackoverflow.com/questions/21452621/binmode-encoding-handling-malformed-data
479 # https://stackoverflow.com/questions/1348639/how-can-i-reinitialize-perls-stdin-stdout-stderr
480 # https://metacpan.org/pod/open::layers
481 #binmode(STDERR, ':utf8'); ## FOR DEBUGGING! To avoid "wide character in print" messages, but modifies globally for process!
482
483 #print STDERR "---------------\nDUMP.TXT\n---------\n", $$textref, "\n------------------------\n";
484
485
486 # (1) parse out the metadata of this record
487 my $metaname;
488 my $encoding;
489 my $title_meta;
490
491 my $line_index = 0;
492 my $text_start_index = -1;
493 my @lines = split(/(?:\r?\n)+/, $$textref);
494
495 foreach my $line (@lines) {
496 #$line =~ s@\{@\\{@g; # escape open curly braces for newer perl
497
498 # first line is special and contains the URL (no metaname)
499 # and the inverted URL labelled with metaname "key"
500 if($line =~ m/^https?/ && $line =~ m/\s+key:\s+/) {
501 my @vals = split(/key:/, $line);
502 # get url and key, and trim whitespace simultaneously
503 my $url = &util::trim($vals[0]);
504 my $key = &util::trim($vals[1]);
505
506 # if we have a keep_urls hash, then only process records of whitelisted urls
507 if(defined $self->{'keep_urls'} && !$self->{'keep_urls'}->{$url}) {
508 # URL not whitelisted, so stop processing this record
509 print STDERR "@@@@@@ INFO NutchTextDumpPlugin::process(): discarding record for URL not whitelisted: $url\n"
510 if $self->{'verbosity'} > 3;
511 return 0;
512 } else {
513 print STDERR "@@@@@@ INFO NutchTextDumpPlugin::process(): processing record of whitelisted URL $url...\n"
514 if $self->{'verbosity'} > 3;
515 }
516 $doc_obj->add_utf8_metadata ($cursection, "srcURL", $url);
517 $doc_obj->add_utf8_metadata ($cursection, "key", $key);
518
519
520 # let's also set the domain from the URL, as that will make a
521 # more informative bookshelf label than siteID
522 # For complete domain, keep protocol:// and every non-slash after.
523 # (This avoids requiring presence of subsequent slash)
524 # https://stackoverflow.com/questions/3652527/match-regex-and-assign-results-in-single-line-of-code
525 # Can clean up protocol and www. in List classifier's bookshelf's remove_prefix option
526 # or can build classifier on basicDomain instead.
527
528 my ($domain, $basicDomain) = $url =~ m@(^https?://(?:www\.)?([^/]+)).*@;
529 #my ($domain, $protocol, $basicdomain) = $url =~ m@((^https?)://([^/]+)).*@; # Works
530 $doc_obj->add_utf8_metadata ($cursection, "srcDomain", $domain);
531 $doc_obj->add_utf8_metadata ($cursection, "basicDomain", $basicDomain);
532
533 }
534 # check for full text
535 elsif ($line =~ m/text:start:/) {
536 $text_start_index = $line_index;
537 last; # if we've reached the full text portion, we're past the metadata portion of this record
538 }
539 elsif($line =~ m/^[^:]+:.+$/) { # look for meta #elsif($line =~ m/^[^:]+:[^:]+$/) { # won't allow protocol://url in metavalue
540	    my @metakeyvalues = split(/:/, $line, 2); # split on the first : only, since metavalues (e.g. URLs) may themselves contain ':'
541
542 my $metaname = shift(@metakeyvalues);
543 my $metavalue = join("", @metakeyvalues);
544
545 # skip "metadata _rs_" and "metadata _csh_" as these contain illegible characters for values
546 if($metaname !~ m/metadata\s+_(rs|csh)_/) {
547
548 # trim whitespace
549 $metaname = &util::trim($metaname);
550 $metavalue = &util::trim($metavalue);
551
552 if($metaname eq "title") { # TODO: what to do about "title: null" cases?
553 ##print STDERR "@@@@ Found title: $metavalue\n";
554 #$metaname = "Title"; # will set "title" as "Title" metadata instead
555 # TODO: treat title metadata specially by using character encoding to store correctly?
556
557 # Won't add Title metadata to docObj until after all meta is processed,
558 # when we'll know encoding and can process title meta
559 $title_meta = $metavalue;
560 $metavalue = ""; # will force ex.Title metadata to be added AFTER for loop
561 }
562 elsif($metaname =~ m/CharEncodingForConversion/) { # TODO: or look for "OriginalCharEncoding"?
563 ##print STDERR "@@@@ Found encoding: $metavalue\n";
564 $encoding = $metavalue; # TODO: should we use this to interpret the text and title in the correct encoding and convert to utf-8?
565
566 if($encoding eq "utf-8") {
567 $encoding = "utf8"; # method to_utf8() recognises "utf8" not "utf-8"
568 } else {
569 my $srcURL = $doc_obj->get_metadata_element($cursection, "srcURL");
570 print STDERR "@@@@@@ WARNING NutchTextDumpPlugin::process(): Record's Nutch-assigned CharEncodingForConversion was not utf-8 but $encoding\n\tfor record: $srcURL\n";
571 }
572
573 }
574
575 # move occurrences of "marker " or "metadata " strings at start of metaname to end
576 #$metaname =~ s/^(marker|metadata)\s+(.*)$/$2$1/;
577 # remove "marker " or "metadata " strings from start of metaname
578 $metaname =~ s/^(marker|metadata)\s+//;
579 # remove underscores and all remaining spaces in metaname
580 $metaname =~ s/[ _]//g;
581
582 # add meta to docObject if both metaname and metavalue are non-empty strings
583 if($metaname ne "" && $metavalue ne "") {
584 # when no namespace is provided as here, adds as ex. meta.
585 # Don't explicitly prefix ex., as things becomes convoluted when retrieving meta
586 $doc_obj->add_utf8_metadata ($cursection, $metaname, $metavalue);
587 #print STDERR "Added meta |$metaname| = |$metavalue|\n"; #if $metaname =~ m/ProtocolStatus/i;
588 }
589
590 }
591 } elsif ($line !~ m/^\s*$/) { # Not expecting any other type of non-empty line (or even empty lines)
592 print STDERR "NutchTextDump line not recognised as URL meta, other metadata or text content:\n\t$line\n";
593 }
594
595 $line_index++;
596 }
597
598
599    # Add FileFormat metadata
600 $doc_obj->add_metadata($cursection, "FileFormat", "NutchDumpTxt");
601
602    # Correct the title metadata using the detected encoding, now that we finally have $encoding (if any)
603 # https://stackoverflow.com/questions/12994100/perl-encode-pm-cannot-decode-string-with-wide-character
604 # Error message: "Perl Encode.pm cannot decode string with wide character"
605 # "That error message is saying that you have passed in a string that has already been decoded
606 # (and contains characters above codepoint 255). You can't decode it again."
607 if($title_meta && $title_meta ne "" && $title_meta ne "null") {
608 #$title_meta = $self->to_utf8($encoding, $title_meta) if ($encoding);
609 } else { # if we have "null" as title metadata, set it to the record URL?
610 my $srcURL = $doc_obj->get_metadata_element($cursection, "srcURL");
611 my ($basicURL) = $srcURL =~ m@^https?://(?:www\.)?(.*)$@; # use basicURL for title instead of srcURL, else many docs get classified under "Htt" bucket for https
612 if(defined $srcURL) {
613 print STDERR "@@@@ null/empty title to be replaced with ".$basicURL."\n"
614 if $self->{'verbosity'} > 3;
615 $title_meta = $basicURL;
616 }
617 }
618 $doc_obj->add_utf8_metadata ($cursection, "Title", $title_meta);
619
620
621 # When importOption OIDtype = dirname, the base_OID will be that dirname
622 # which was crafted to be the siteID. However, because our siteID is all numeric,
623 # a D gets prepended to create baseOID. Remove the starting 'D' to get actual siteID.
624 my $siteID = $self->get_siteID($doc_obj, $file);
625 #print STDERR "BASE OID: " . $siteID . "\n";
626 $siteID =~ s/^D//;
627 $doc_obj->add_utf8_metadata ($cursection, "siteID", $siteID);
628
629
630 # (2) parse out text of this record
631 # if($text_start_index != -1 && pop(@lines) =~ m/text:end:/) { # we only have text content if there were "text:start:" and "text:end:" markers.
632 # # TODO: are we guaranteed popped line is text:end: and not empty/newline?
633 # @lines = splice(@lines,0,$text_start_index+1); # just keep every line AFTER text:start:, have already removed (popped) "text:end:"
634
635 # # glue together remaining lines, if there are any, into textref
636 # # https://stackoverflow.com/questions/7406807/find-size-of-an-array-in-perl
637 # if(scalar (@lines) > 0) {
638 # # TODO: do anything with $encoding to convert line to utf-8?
639 # foreach my $line (@lines) {
640 # $line = $self->to_utf8($encoding, $line) if $encoding; #if $encoding ne "utf-8";
641 # $$textref .= $line."\n";
642 # }
643 # }
644 # $$textref = "<pre>\n".$$textref."</pre>";
645 # } else {
646 # print STDERR "WARNING: NutchTextDumpPlugin::process: had found a text start marker but not text end marker.\n";
647 # $$textref = "<pre></pre>";
648 # }
649
650 # (2) parse out text of this record
651 my $no_text = 1;
652 if($text_start_index != -1) { # had found a "text:start:" marker, so we should have text content for this record
653
654 if($$textref =~ m/text:start:\r?\n(.*?)\r?\ntext:end:/) {
655 $$textref = $1;
656 if($$textref !~ m/^\s*$/) {
657 #$$textref = $self->to_utf8($encoding, $$textref) if ($encoding);
658 $$textref = "<pre>\n".$$textref."\n</pre>";
659 $no_text = 0;
660 }
661 }
662 }
663 if($no_text) {
664 $$textref = "<pre></pre>";
665 }
666
667 # Debugging
668 # To avoid "wide character in print" messages for debugging, set binmode of handle to utf8/encoding
669 # https://stackoverflow.com/questions/15210532/use-of-use-utf8-gives-me-wide-character-in-print
670 # if ($self->{'verbosity'} > 3) {
671 # if($encoding && $encoding eq "utf8") {
672 # binmode STDERR, ':utf8';
673 # }
674
675 # print STDERR "TITLE: $title_meta\n";
676 # print STDERR "ENCODING = $encoding\n" if $encoding;
677 # #print STDERR "---------------\nTEXT CONTENT\n---------\n", $$textref, "\n------------------------\n";
678 # }
679
680
681 $doc_obj->add_utf8_text($cursection, $$textref);
682
683 return 1;
684}
685
686# Returns the siteID when the file in import is of the form siteID.txt
687# Returns the siteID when the import contains siteID/dump.txt (as happens when OIDtype=dirname)
688# Returns whatever the baseOID is in other situations; not sure if that's meaningful, but nothing other
689# than siteID/dump.txt and siteID.txt should have passed the can_process_this_file() test anyway
690sub get_siteID {
691 my $self = shift(@_);
692 my ($doc_obj, $file) = @_;
693
694 my $siteID;
695    if ($file =~ /(\d+)\.txt$/) {
696 # file name without extension is site ID, e.g. 00001.txt
697 $siteID = $1;
698 }
699 else { # if($doc_obj->{'OIDtype'} eq "dirname") or even otherwise, just use baseOID
700 # baseOID is the same as site ID when OIDtype is configured to dirname because docs are stored as 00001/dump.txt
701 # siteID has no real meaning in other cases
702 $siteID = $self->{'dirname_siteID'} || $self->get_base_OID($doc_obj);
703
704 }
705 if(!$self->{'siteID'} || $siteID ne $self->{'siteID'}) {
706 $self->{'siteID'} = $siteID;
707 }
708 return $self->{'siteID'};
709}
710
711
712# SplitTextFile::get_base_OID() has the side-effect of calling SUPER::add_OID()
713# in order to initialise segment IDs.
714# This then ultimately results in calling util::tidy_up_OID() to print warning messages
715# about siteIDs forming all-numeric baseOIDs that require the D prefix prepended.
716# In cases where site ID is the same as baseOID and is needed to set siteID meta, we want to avoid
717# the warning messages but don't want to prevent the important side-effects of SplitTextFile::get_base_OID()
718# So instead of overriding this method to compute and store baseOID the first time and return
719# the stored value on subsequent calls (which would have the undesirable result of losing that
720# side-effect, since super's get_base_OID() would then no longer ALWAYS be called), we just always
721# store the return value before returning it. The check for an existing stored value to use,
722# before falling back to computing it via this get_base_OID(), is pushed onto a separate
723# function that calls this one: get_siteID(). Problem solved.
724sub get_base_OID {
725 my $self = shift(@_);
726 my ($doc_obj) = @_;
727
728 #if(!defined $self->{'dirname_siteID'}) { # DON'T DO THIS: loses essential side-effect of always calling super's get_base_OID()
729 # this method is overridden, so it's not just called by this NutchTextDumpPlugin
730
731 $self->{'dirname_siteID'} = $self->SUPER::get_base_OID($doc_obj); # store for NutchTextDumpPlugin's internal use
732 #}
733 return $self->{'dirname_siteID'}; # return superclass return value as always
734}
7351;