root/other-projects/nightly-tasks/diffcol/trunk/model-collect/PDFBox/log/build_log.1373002980103.txt @ 27953

Revision 27953, 7.6 KB (checked in by ak19, 7 years ago)

Redid PDFBox collection without 2nd pdf file.

s
Command: perl -S /research/ak19/GS2bin_5July2013/bin/script/full-import.pl -gli -language en -collectdir /research/ak19/GS2bin_5July2013/collect PDFBox
import.pl> Detected -sortmeta. To effect the stipulated sorting by metadata (or OID) remember this option should be paired with either the '-reversesort' or '-sort' option to ArchivesInfPlugin.
import.pl> Removing current contents of the archives directory...
import.pl> Removing contents of the collection "tmp" directory...
import.pl> Global file scan checking directory: /research/ak19/GS2bin_5July2013/collect/PDFBox/import
import.pl> EmbeddedMetadataPlugin: processing A9-access-best-practices.pdf
import.pl>  Extracted 29 pieces of metadata from /research/ak19/GS2bin_5July2013/collect/PDFBox/import/A9-access-best-practices.pdf EXIF block
import.pl> EmbeddedMetadataPlugin: processing Install-en.pdf
import.pl>  Extracted 26 pieces of metadata from /research/ak19/GS2bin_5July2013/collect/PDFBox/import/Install-en.pdf EXIF block
import.pl> EmbeddedMetadataPlugin: processing jpeg2000.pdf
import.pl>  Extracted 12 pieces of metadata from /research/ak19/GS2bin_5July2013/collect/PDFBox/import/jpeg2000.pdf EXIF block
import.pl> MetadataXMLPlugin: processing metadata.xml
import.pl> EmbeddedMetadataPlugin: processing pdf03.pdf
import.pl>  Extracted 16 pieces of metadata from /research/ak19/GS2bin_5July2013/collect/PDFBox/import/pdf03.pdf EXIF block
import.pl> Converting A9-access-best-practices.pdf to html format
import.pl> calling cmd "/usr/bin/perl" -S gsConvert.pl -verbose 2 -pdf_zoom 2 -errlog "/research/ak19/GS2bin_5July2013/collect/PDFBox/tmp/1373002982/err.log" -output html "/research/ak19/GS2bin_5July2013/collect/PDFBox/tmp/1373002982/A9-access-best-practices.pdf"
import.pl> Error executing pdftohtml.pl
import.pl> pdftohtml error log:
import.pl> Error: PDF version 1.7 -- xpdf supports version 1.4 (continuing anyway)
import.pl> Error (0): PDF file is damaged - attempting to reconstruct xref table...
import.pl> Error: Couldn't find trailer dictionary
import.pl> Error: Couldn't read xref table
import.pl> Could not convert A9-access-best-practices.pdf to html format
import.pl> Error: PDF version 1.7 -- xpdf supports version 1.4 (continuing anyway)
import.pl> Error (0): PDF file is damaged - attempting to reconstruct xref table...
import.pl> Error: Couldn't find trailer dictionary
import.pl> Error: Couldn't read xref table
import.pl> WARNING: No plugin could process A9-access-best-practices.pdf
import.pl> Converting Install-en.pdf to html format
import.pl> calling cmd "/usr/bin/perl" -S gsConvert.pl -verbose 2 -pdf_zoom 2 -errlog "/research/ak19/GS2bin_5July2013/collect/PDFBox/tmp/1373002983/err.log" -output html "/research/ak19/GS2bin_5July2013/collect/PDFBox/tmp/1373002983/Install-en.pdf"
import.pl> HTMLPlugin processing /research/ak19/GS2bin_5July2013/collect/PDFBox/tmp/1373002983/Install-en.html
import.pl> Converting jpeg2000.pdf to html format
import.pl> calling cmd "/usr/bin/perl" -S gsConvert.pl -verbose 2 -pdf_zoom 2 -errlog "/research/ak19/GS2bin_5July2013/collect/PDFBox/tmp/1373002983/err.log" -output html "/research/ak19/GS2bin_5July2013/collect/PDFBox/tmp/1373002983/jpeg2000.pdf"
import.pl> HTMLPlugin processing /research/ak19/GS2bin_5July2013/collect/PDFBox/tmp/1373002983/jpeg2000.html
import.pl> Converting pdf03.pdf to html format
import.pl> calling cmd "/usr/bin/perl" -S gsConvert.pl -verbose 2 -pdf_zoom 2 -errlog "/research/ak19/GS2bin_5July2013/collect/PDFBox/tmp/1373002983/err.log" -output html "/research/ak19/GS2bin_5July2013/collect/PDFBox/tmp/1373002983/pdf03.pdf"
import.pl> HTMLPlugin processing /research/ak19/GS2bin_5July2013/collect/PDFBox/tmp/1373002983/pdf03.html
import.pl> *********************************************
import.pl> Import complete
import.pl> *********************************************
import.pl> * 4 documents were considered for processing
import.pl> * 3 were processed and included in the collection
import.pl> * 1 was rejected
import.pl>  See /research/ak19/GS2bin_5July2013/collect/PDFBox/etc/fail.log for a list of unrecognised and/or rejected documents
import.pl> Command complete.
import.pl> Extracting new metadata from archive files.
import.pl> Archived metadata extraction complete.
Command: perl -S /research/ak19/GS2bin_5July2013/bin/script/full-buildcol.pl -gli -language en -collectdir /research/ak19/GS2bin_5July2013/collect PDFBox
buildcol.pl> *** creating the compressed text
buildcol.pl>     collecting text statistics (mgpp_passes -T1)
buildcol.pl> ArchivesInfPlugin: processing /research/ak19/GS2bin_5July2013/collect/PDFBox/archives/archiveinf-doc.gdb
buildcol.pl> GreenstoneXMLPlugin: processing HASH010b5ca7.dir/doc.xml
buildcol.pl> GreenstoneXMLPlugin: processing HASH019c5dca.dir/doc.xml
buildcol.pl> GreenstoneXMLPlugin: processing HASH11e4ec6d.dir/doc.xml
buildcol.pl> Stats (Compressing text from text)
buildcol.pl> Total bytes in collection: 198714
buildcol.pl> Total bytes in text: 198717
buildcol.pl>     creating the compression dictionary
buildcol.pl>     compressing the text (mgpp_passes -T2)
buildcol.pl> ArchivesInfPlugin: processing /research/ak19/GS2bin_5July2013/collect/PDFBox/archives/archiveinf-doc.gdb
buildcol.pl> GreenstoneXMLPlugin: processing HASH010b5ca7.dir/doc.xml
buildcol.pl> GreenstoneXMLPlugin: processing HASH019c5dca.dir/doc.xml
buildcol.pl> GreenstoneXMLPlugin: processing HASH11e4ec6d.dir/doc.xml
buildcol.pl> Stats (Compressing text from text)
buildcol.pl> Total bytes in collection: 198714
buildcol.pl> Total bytes in text: 198717
buildcol.pl> *** building index text;dc.Title,ex.dc.Title,Title;Source; in subdirectory idx
buildcol.pl>     creating index dictionary (mgpp_passes -I1)
buildcol.pl> ArchivesInfPlugin: processing /research/ak19/GS2bin_5July2013/collect/PDFBox/archives/archiveinf-doc.gdb
buildcol.pl> GreenstoneXMLPlugin: processing HASH010b5ca7.dir/doc.xml
buildcol.pl> GreenstoneXMLPlugin: processing HASH019c5dca.dir/doc.xml
buildcol.pl> GreenstoneXMLPlugin: processing HASH11e4ec6d.dir/doc.xml
buildcol.pl> Stats (Creating index text;dc.Title,ex.dc.Title,Title;Source;)
buildcol.pl> Total bytes in collection: 198714
buildcol.pl> Total bytes in text;dc.Title,ex.dc.Title,Title;Source;: 174971
buildcol.pl>     inverting the text (mgpp_passes -I2)
buildcol.pl> ArchivesInfPlugin: processing /research/ak19/GS2bin_5July2013/collect/PDFBox/archives/archiveinf-doc.gdb
buildcol.pl> GreenstoneXMLPlugin: processing HASH010b5ca7.dir/doc.xml
buildcol.pl> GreenstoneXMLPlugin: processing HASH019c5dca.dir/doc.xml
buildcol.pl> GreenstoneXMLPlugin: processing HASH11e4ec6d.dir/doc.xml
buildcol.pl> Stats (Creating index text;dc.Title,ex.dc.Title,Title;Source;)
buildcol.pl> Total bytes in collection: 198714
buildcol.pl> Total bytes in text;dc.Title,ex.dc.Title,Title;Source;: 174971
buildcol.pl>     create the weights file
buildcol.pl>     creating 'on-disk' stemmed dictionary
buildcol.pl>     creating stem indexes
buildcol.pl> BuildDir: /research/ak19/GS2bin_5July2013/collect/PDFBox/building
buildcol.pl> *** creating the info database and processing associated files
buildcol.pl> ArchivesInfPlugin: processing /research/ak19/GS2bin_5July2013/collect/PDFBox/archives/archiveinf-doc.gdb
buildcol.pl> GreenstoneXMLPlugin: processing HASH010b5ca7.dir/doc.xml
buildcol.pl> GreenstoneXMLPlugin: processing HASH019c5dca.dir/doc.xml
buildcol.pl> GreenstoneXMLPlugin: processing HASH11e4ec6d.dir/doc.xml
buildcol.pl> *** outputting information for classifier: CL1
buildcol.pl> *** outputting information for classifier: CL2
buildcol.pl> *** outputting information for classifier: oai
buildcol.pl> *** creating auxiliary files
buildcol.pl> Copying rss-items.rdf file from archives to building (eventually to index)
buildcol.pl> Command complete.