#
# Resource bundle description
#

Language.code:en
Language.name:English
OutputEncoding.unix:iso_8859_1
OutputEncoding.windows:iso_8859_1

#
# Common output messages
#

common.cannot_create_file:ERROR: Can't create file %s
common.cannot_find_cfg_file:ERROR: Can't find the configuration file %s
common.cannot_open:ERROR: Can't open %s
common.cannot_open_fail_log:ERROR: Can't open fail log %s
common.cannot_open_output_file:ERROR: Can't open output file %s
common.cannot_read:ERROR: Can't read %s
common.cannot_read_file:ERROR: Can't read file %s
common.general_options:general options (for %s)
common.must_be_implemented:function must be implemented in sub-class
common.options:options
common.processing:processing
common.specific_options:specific options
common.usage:Usage
common.info:info
common.invalid_options:Invalid arguments: %s

#
# Script option descriptions and output messages
#

scripts.language:Language to display option descriptions in (e.g. 'en_US' specifies American English). Requires translations of the option descriptions to exist in the perllib/strings_language-code.rb file.
scripts.xml:Produces the information in an XML form, without 'pretty' comments but with much more detail.
scripts.listall:Lists all items known about.
scripts.describeall:Display options for all items known about.
scripts.both_old_options:WARNING: -removeold was specified with -keepold or -incremental, defaulting to -removeold. Current contents of %s directory will be deleted.
scripts.no_old_options:WARNING: None of -removeold, -keepold or -incremental were specified, defaulting to -removeold. Current contents of %s directory will be deleted.

# -- buildcol.pl --

buildcol.archivedir:Where the archives live.
buildcol.builddir:Where to put the built indexes.
buildcol.cachedir:Collection will be temporarily built here before being copied to the build directory.
buildcol.cannot_open_cfg_file:WARNING: Can't open config file for updating: %s
buildcol.collectdir:The path of the "collect" directory.
buildcol.copying_back_cached_build:Copying back the cached build
buildcol.create_images:Attempt to create default images for new collection. This relies on the Gimp being installed along with relevant perl modules to allow scripting from perl.
buildcol.debug:Print output to STDOUT.
buildcol.desc:PERL script used to build a Greenstone collection from GA documents.
buildcol.faillog:Fail log filename. This log receives the filenames of any files which fail to be processed.
buildcol.index:Index to build (will build all in config file if not set).
buildcol.incremental:Only index documents which have not been previously indexed. Implies -keepold. Relies on the Lucene indexer.
buildcol.incremental_dlc:Truly incremental update of the GDBM database. Only works for hierarchy classifiers.
buildcol.keepold:Will not destroy the current contents of the building directory.
buildcol.maxdocs:Maximum number of documents to build.
buildcol.maxnumeric:The maximum number of digits a 'word' can have in the index dictionary. Large numbers are split into several words for indexing. For example, if maxnumeric is 4, "1342663" will be split into "1342" and "663".
buildcol.mode:The parts of the building process to carry out.
buildcol.mode.all:Do everything.
buildcol.mode.build_index:Just index the text.
buildcol.mode.compress_text:Just compress the text.
buildcol.mode.infodb:Just build the metadata database.
buildcol.no_default_images:Default images will not be generated.
buildcol.no_image_script:WARNING: Image making script could not be found: %s
buildcol.no_strip_html:Do not strip the html tags from the indexed text (only used for mgpp collections).
buildcol.no_text:Don't store compressed text. This option is useful for minimizing the size of the built indexes if you intend always to display the original documents at run time (i.e. you won't be able to retrieve the compressed text version).
buildcol.sections_index_document_metadata:Index document level metadata at section level.
buildcol.sections_index_document_metadata.never:Don't index any document metadata at section level.
buildcol.sections_index_document_metadata.always:Add all specified document level metadata even if section level metadata of that name exists.
buildcol.sections_index_document_metadata.unless_section_metadata_exists:Only add document level metadata if no section level metadata of that name exists.
buildcol.out:Filename or handle to print output status to.
buildcol.params:[options] collection-name
buildcol.remove_empty_classifications:Hide empty classifiers and classification nodes (those that contain no documents).
buildcol.removeold:Will remove the old contents of the building directory.
buildcol.unlinked_col_images:Collection images may not be linked correctly.
buildcol.unknown_mode:Unknown mode: %s
buildcol.updating_archive_cache:Updating archive cache
buildcol.verbosity:Controls the quantity of output. 0=none, 3=lots.

# -- classinfo.pl --

classinfo.collection:Giving a collection name will make classinfo.pl look in collect/collection-name/perllib/classify first. If the classifier is not found there it will look in the general perllib/classify directory.
classinfo.desc:Prints information about a classifier.
classinfo.general_options:General options are inherited from parent classes of the classifier.
classinfo.info:info
classinfo.no_classifier_name:ERROR: You must provide a classifier name.
classinfo.option_types:Classifiers may take two types of options
classinfo.params:[options] classifier-name
classinfo.passing_options:Options may be passed to any classifier by including them in your collect.cfg configuration file.
classinfo.specific_options:Specific options are defined within the classifier itself, and are available only to this particular classifier.
# -- downloadfrom.pl --

downloadfrom.cache_dir:The location of the cache directory
downloadfrom.desc:Downloads files from an external server
downloadfrom.download_mode:The type of server to download from
downloadfrom.download_mode.Web:HTTP
downloadfrom.download_mode.MediaWiki:MediaWiki website
downloadfrom.download_mode.OAI:Open Archives Initiative
downloadfrom.download_mode.z3950:z3950 server
downloadfrom.download_mode.SRW:SearchRetrieve Webservice
downloadfrom.incorrect_mode:download_mode parameter was incorrect.
downloadfrom.info:Print information about the server, rather than downloading
downloadfrom.params:[general options] [specific download options]

# -- downloadinfo.pl --

downloadinfo.desc:Prints information about a download module
downloadinfo.collection:Giving a collection name will make downloadinfo.pl look in collect/collection-name/perllib/downloaders first. If the module is not found there it will look in the general perllib/downloaders directory.
downloadinfo.params:[options] [download-module]
downloadinfo.general_options:General options are inherited from parent classes of the download modules.
downloadinfo.specific_options:Specific options are defined within the download module itself, and are available only to this particular downloader.
downloadinfo.option_types:Download modules may take two types of options

# -- explode_metadata_database.pl --

explode.desc:Explode a metadata database
explode.document_field:The metadata element specifying the file name of documents to obtain and include in the collection.
explode.document_prefix:A prefix for the document locations (for use with the document_field option).
explode.document_suffix:A suffix for the document locations (for use with the document_field option).
explode.encoding:Encoding to use when reading in the database file
explode.metadata_set:Metadata set (namespace) to export all metadata as
explode.plugin:Plugin to use for exploding
explode.params:[options] filename
explode.records_per_folder:The number of records to put in each subfolder.

# -- exportcol.pl --

exportcol.out:Filename or handle to print output status to.
exportcol.cddir:The name of the directory that the CD contents are exported to.
exportcol.cdname:The name of the CD-ROM -- this is what will appear in the start menu once the CD-ROM is installed.
exportcol.desc:PERL script used to export one or more collections to a Windows CD-ROM.
exportcol.noinstall:Create a CD-ROM where the library runs directly off the CD-ROM and nothing is installed on the host computer.
exportcol.params:[options] collection-name1 collection-name2 ...
exportcol.coll_not_found:Ignoring invalid collection %s: collection not found at %s.
exportcol.coll_dirs_not_found:Ignoring invalid collection %s: one of the following directories not found:
exportcol.fail:exportcol.pl failed:
exportcol.no_valid_colls:No valid collections specified to export.
exportcol.couldnt_create_dir:Could not create directory %s.
exportcol.couldnt_create_file:Could not create %s.
exportcol.instructions:To create a self-installing Windows CD-ROM, write the contents of this folder out to a CD-ROM.
exportcol.non_exist_files:One or more of the following necessary files and directories does not exist:
exportcol.success:exportcol.pl succeeded:
exportcol.output_dir:The exported collections (%s) are in %s.
exportcol.export_coll_not_installed:The Export to CD-ROM functionality has not been installed.

# -- import.pl --

import.archivedir:Where the converted material ends up.
import.manifest:An XML file that details what files are to be imported. Used instead of recursively descending the import folder, typically for incremental building.
import.cannot_open_stats_file:WARNING: Couldn't open stats file %s.
import.cannot_open_fail_log:ERROR: Couldn't open fail log %s
import.cannot_sort:WARNING: import.pl cannot sort documents when groupsize > 1. sortmeta option will be ignored.
import.collectdir:The path of the "collect" directory.
import.complete:Import complete
import.debug:Print imported text to STDOUT (for GA importing)
import.desc:PERL script used to import files into a GA format ready for building.
import.faillog:Fail log filename. This log receives the filenames of any files which fail to be processed.
import.groupsize:Number of import documents to group into one XML file.
import.gzip:Use gzip to compress resulting xml documents (don't forget to include ZIPPlug in your plugin list when building from compressed documents).
import.importdir:Where the original material lives.
import.incremental:Only import documents which are newer (by timestamp) than the current archives files. Implies -keepold.
import.keepold:Will not destroy the current contents of the archives directory.
import.maxdocs:Maximum number of documents to import.
import.no_import_dir:Error: Import dir (%s) not found.
import.no_plugins_loaded:ERROR: No plugins loaded.
import.OIDtype:The method to use when generating unique identifiers for each document.
import.OIDtype.hash:Hash the contents of the file. Document identifiers will be the same every time the collection is imported.
import.OIDtype.incremental:Use a simple document count. Significantly faster than "hash", but does not assign the same identifier to the same document content, and further documents cannot be added to existing archives.
import.OIDtype.assigned:Use the metadata value given by the OIDmetadata option (preceded by 'D'); if unspecified, for a particular document a hash is used instead. These identifiers should be unique.
import.OIDtype.dirname:Use the parent directory name (preceded by 'J'). There should only be one document per directory, and directory names should be unique. E.g. import/b13as/h15ef/page.html will get an identifier of Jh15ef.
import.OIDmetadata:Specifies the metadata element that holds the document's unique identifier, for use with -OIDtype=assigned.
import.saveas:The archive format to be generated. The default is GA.
import.saveas.GA:Will generate Greenstone Archive format.
import.saveas.METS:Will generate METS format.
import.out:Filename or handle to print output status to.
import.params:[options] collection-name
import.removeold:Will remove the old contents of the archives directory.
import.removing_archives:Removing current contents of the archives directory...
import.removing_tmpdir:Removing contents of the collection "tmp" directory...
import.sortmeta:Sort documents alphabetically by metadata for building. Search results for boolean queries will be displayed in this order. This will be disabled if groupsize > 1. May be a comma separated list to sort by more than one metadata value.
import.statsfile:Filename or handle to print import statistics to.
import.stats_backup:Will print stats to STDERR instead.
import.verbosity:Controls the quantity of output. 0=none, 3=lots.

# -- export.pl --

export.exportdir:Where the export material ends up.
export.cannot_open_stats_file:WARNING: Couldn't open stats file %s.
export.cannot_open_fail_log:ERROR: Couldn't open fail log %s
export.cannot_sort:WARNING: export.pl cannot sort documents when groupsize > 1. sortmeta option will be ignored.
export.collectdir:The path of the "collect" directory.
export.complete:Export complete
export.debug:Print exported text to STDOUT (for GA exporting)
export.desc:PERL script used to export files in a Greenstone collection to another format.
export.faillog:Fail log filename. This log receives the filenames of any files which fail to be processed. (Default: collectdir/collname/etc/fail.log)
export.groupsize:Number of documents to group into one XML file.
export.gzip:Use gzip to compress resulting xml documents (don't forget to include ZIPPlug in your plugin list when building from compressed documents).
export.importdir:Where the original material lives.
export.keepold:Will not destroy the current contents of the export directory.
export.maxdocs:Maximum number of documents to export.
export.listall:List all the saveas formats
export.saveas:Format to export documents as.
export.saveas.DSpace:DSpace Archive format.
export.saveas.METS:METS format using the Greenstone profile.
export.saveas.GA:Greenstone Archive format
export.saveas.MARCXML:MARC XML format (an XML version of MARC 21)
export.saveas_version:Currently only valid with 'saveas METS', options are 'greenstone', for Greenstone METS, or 'fedora', for Fedora METS.
export.out:Filename or handle to print output status to.
export.params:[options] collection-name1, collection-name2...
export.removeold:Will remove the old contents of the export directory.
export.removing_export:Removing current contents of the export directory...
export.sortmeta:Sort documents alphabetically by metadata for building. This will be disabled if groupsize > 1.
export.statsfile:Filename or handle to print export statistics to.
export.stats_backup:Will print stats to STDERR instead.
export.verbosity:Controls the quantity of output. 0=none, 3=lots.

# -- mkcol.pl --

mkcol.about:The about text for the collection.
mkcol.bad_name_cvs:ERROR: No collection can be named CVS as this may interfere with directories created by the CVS versioning system.
mkcol.bad_name_modelcol:ERROR: No collection can be named modelcol as this is the name of the model collection.
mkcol.cannot_find_modelcol:ERROR: Cannot find the model collection %s
mkcol.col_already_exists:ERROR: This collection already exists.
mkcol.collectdir:Directory where new collection will be created.
mkcol.creating_col:Creating the collection %s
mkcol.creator:The collection creator's e-mail address.
mkcol.creator_undefined:ERROR: The creator was not defined. This variable is needed to recognise duplicate collection names.
mkcol.desc:PERL script used to create the directory structure for a new Greenstone collection.
mkcol.doing_replacements:doing replacements for %s
mkcol.long_colname:ERROR: The collection name must be less than 8 characters so compatibility with earlier filesystems can be maintained.
mkcol.maintainer:The collection maintainer's email address (if different from the creator).
mkcol.no_collectdir:ERROR: The collect dir doesn't exist: %s
mkcol.no_colname:ERROR: No collection name was specified.
mkcol.optionfile:Get options from file, useful on systems where long command lines may cause problems.
mkcol.params:[options] collection-name
mkcol.plugin:Perl plugin module to use (there may be multiple plugin entries).
mkcol.public:If this collection has anonymous access.
mkcol.public.true:Collection is public
mkcol.public.false:Collection is private
mkcol.quiet:Operate quietly.
mkcol.success:The new collection was created successfully at %s
mkcol.title:The title of the collection.
mkcol.win31compat:Whether or not the named collection directory must conform to Windows 3.1 file conventions (i.e. 8 characters long).
mkcol.win31compat.true:Directory name 8 characters or less
mkcol.win31compat.false:Directory name any length

# -- pluginfo.pl --

pluginfo.collection:Giving a collection name will make pluginfo.pl look in collect/collection-name/perllib/plugins first. If the plugin is not found there it will look in the general perllib/plugins directory.
pluginfo.desc:Prints information about a plugin.
pluginfo.general_options:General options are inherited from parent classes of the plugin.
pluginfo.info:info
pluginfo.no_plugin_name:ERROR: You must provide a plugin name.
pluginfo.option_types:Plugins may take two types of options
pluginfo.params:[options] plugin-name
pluginfo.passing_options:Options may be passed to any plugin by including them in your collect.cfg configuration file.
pluginfo.specific_options:Specific options are defined within the plugin itself, and are available only to this particular plugin.

# -- plugoutinfo.pl --

plugoutinfo.collection:Giving a collection name will make plugoutinfo.pl look in collect/collection-name/perllib/plugouts first. If the plugout is not found there it will look in the general perllib/plugouts directory.
plugoutinfo.desc:Prints information about a plugout.
plugoutinfo.general_options:General options are inherited from parent classes of the plugout.
plugoutinfo.info:info
plugoutinfo.no_plugout_name:ERROR: You must provide a plugout name.
plugoutinfo.option_types:Plugouts may take two types of options
plugoutinfo.params:[options] plugout-name
plugoutinfo.passing_options:Options may be passed to any plugout by including them in your collect.cfg configuration file.
plugoutinfo.specific_options:Specific options are defined within the plugout itself, and are available only to this particular plugout.

#
# Plugout option descriptions
#

MARCXMLPlugout.desc:MARC XML format.
METSPlugout.desc:METS format using the Greenstone profile.
BasPlugout.desc:Base class for all the export plugouts.
GAPlugout.desc:Greenstone Archive format.
DSpacePlugout.desc:DSpace Archive format.
METSPlugout.version:Currently only valid with 'saveas METS', options are 'greenstone', for Greenstone METS, or 'fedora', for Fedora METS.
BasPlugout.group_size:Number of documents to group into one XML file.
BasPlugout.output_info:The reference to an arcinfo object used to store information about the archives.
BasPlugout.output_handle:The file descriptor used to send output information.
BasPlugout.verbosity:Controls the quantity of output. 0=none, 3=lots.
BasPlugout.gzip_output:Use gzip to compress resulting xml documents (don't forget to include ZIPPlug in your plugin list when building from compressed documents).
BasPlugout.xslt_file:Transform a document with the XSLT in the named file.
MARCXMLPlugout.group:Output the MARC XML records into a single file.
MARCXMLPlugout.mapping_file:Use the named mapping file for the transformation.
METSPlugout.xslt_txt:Transform a METS document's doctxt.xml with the XSLT in the named file.
METSPlugout.xslt_mets:Transform a METS document's docmets.xml with the XSLT in the named file.

#
# Classifier option descriptions
#

AllList.desc:Creates a single list of all documents. Used by the oaiserver.
AZCompactList.allvalues:Use all metadata values found.
AZCompactList.desc:Classifier plugin for sorting alphabetically (on a-zA-Z0-9). Produces a horizontal A-Z list, then a vertical list containing documents, or bookshelves for documents with common metadata.
AZCompactList.doclevel:Level to process document at.
AZCompactList.doclevel.top:Whole document.
AZCompactList.doclevel.section:By sections.
AZCompactList.firstvalueonly:Use only the first metadata value found.
AZCompactList.freqsort:Sort by node frequency rather than alpha-numeric.
AZCompactList.maxcompact:Maximum number of documents to be displayed per page.
AZCompactList.metadata:A single Metadata field, or a comma separated list of Metadata fields, used for classification. If a list is specified, the first metadata type that has values will be used. May be used in conjunction with the -firstvalueonly and -allvalues flags, to select only the first value, or all metadata values from the list.
AZCompactList.mincompact:Minimum number of documents to be displayed per page.
AZCompactList.mingroup:The smallest value that will cause a group in the hierarchy to form.
AZCompactList.minnesting:The smallest value that will cause a list to be converted into a nested list.
AZCompactList.recopt:Used in nested metadata such as -metadata Year/Organisation.
AZCompactList.sort:Metadata field to sort the leaf nodes by.
AZCompactSectionList.desc:Variation on AZCompactList that classifies sections rather than documents. Entries are sorted by section-level metadata.
AZList.desc:Classifier plugin for sorting alphabetically (on a-zA-Z0-9). Produces a horizontal A-Z list, with documents listed underneath.
AZList.metadata:A single Metadata field or a comma separated list of Metadata fields used for classification. Following the order indicated by the list, the first field that contains a Metadata value will be used. List will be sorted by this element.
AZSectionList.desc:Variation on AZList that classifies sections rather than documents. Entries are sorted by section-level metadata.
BasClas.bad_general_option:The %s classifier uses an incorrect option. Check your collect.cfg configuration file.
BasClas.builddir:Where to put the built indexes.
BasClas.buttonname:The label for the classifier screen and button in navigation bar. The default is the metadata element specified with -metadata.
BasClas.desc:Base class for all the classifiers.
BasClas.no_metadata_formatting:Don't do any automatic metadata formatting (for sorting).
BasClas.outhandle:The file handle to write output to.
BasClas.removeprefix:A prefix to ignore in metadata values when sorting.
BasClas.removesuffix:A suffix to ignore in metadata values when sorting.
BasClas.verbosity:Controls the quantity of output. 0=none, 3=lots.
Browse.desc:A fake classifier that provides a link in the navigation bar to a prototype combined browsing and searching page. Only works for mgpp collections, and is only practical for small collections.
DateList.bymonth:Classify by year and month instead of only year.
DateList.desc:Classifier plugin for sorting by date. By default, sorts by 'Date' metadata. Dates are assumed to be in the form yyyymmdd or yyyy-mm-dd.
DateList.metadata:The metadata that contains the dates to classify by. The format is expected to be yyyymmdd or yyyy-mm-dd. Can be a comma separated list, in which case the first date found will be used.
DateList.reverse_sort:Sort the documents in reverse chronological order (newest first).
DateList.nogroup:Make each year an individual entry in the horizontal list, instead of spanning years with few entries. (This can also be used with the -bymonth option to make each month a separate entry instead of merging).
DateList.no_special_formatting:Don't display Year and Month information in the document list.
DateList.sort:An extra metadata field to sort by in the case where two documents have the same date.
GenericList.always_bookshelf_last_level:Create a bookshelf icon even if there is only one item in each group at the leaf nodes.
GenericList.classify_sections:Classify sections instead of documents.
GenericList.desc:A general and flexible list classifier with most of the abilities of AZCompactList, but with better Unicode, metadata and sorting capabilities.
GenericList.metadata:Metadata fields used for classification. Use '/' to separate the levels in the hierarchy and ';' to separate metadata fields within each level.
GenericList.partition_name_length:The length of the partition name; defaults to a variable length from 1 up to 3 characters, depending on how many are required to distinguish the partition start from its end. This option only applies when partition_type_within_level is set to 'constant_size'.
GenericList.partition_size_within_level:The number of items in each partition (only applies when partition_type_within_level is set to 'constant_size').
GenericList.partition_type_within_level:The type of partitioning done: either 'per_letter', 'constant_size', or 'none'.
GenericList.sort_leaf_nodes_using:Metadata fields used for sorting the leaf nodes. Use '|' to separate the metadata groups to stable sort and ';' to separate metadata fields within each group.
GenericList.use_hlist_for:Metadata fields to use an hlist rather than a vlist. Use ',' to separate the metadata groups and ';' to separate the metadata fields within each group.
HFileHierarchy.desc:Classifier plugin for generating hierarchical classifications based on a supplementary structure file.
Hierarchy.desc:Classifier plugin for generating a hierarchical classification. This may be based on structured metadata, or may use a supplementary structure file (use the -hfile option).
Hierarchy.documents_last:Display document nodes after classifier nodes.
Hierarchy.hfile:Use the specified classification structure file.
Hierarchy.hlist_at_top:Display the first level of the classification horizontally.
Hierarchy.reverse_sort:Sort leaf nodes in reverse order (use with -sort).
Hierarchy.separator:Regular expression used for the separator, if using structured metadata.
Hierarchy.sort:Metadata field to sort leaf nodes by. Leaves will not be sorted if not specified.
Hierarchy.suppressfirstlevel:Ignore the first part of the metadata value. This is useful for metadata where the first element is common, such as the import directory in gsdlsourcefilename.
Hierarchy.suppresslastlevel:Ignore the final part of the metadata value. This is useful for metadata where each value is unique, such as file paths.
HTML.desc:Creates an empty classification that's simply a link to a web page.
HTML.url:The url of the web page to link to.
List.desc:Simple list classifier plugin.
List.metadata:A single Metadata field or a comma separated list of Metadata fields used for classification. Following the order indicated by the list, the first field that contains a Metadata value will be used. List will be sorted by this element, unless -sort is used. If no metadata is specified, then all documents will be included in the list, otherwise only documents that contain a metadata value will be included.
List.sort:Metadata field to sort by. Use '-sort nosort' for no sorting.
Phind.desc:Produces a hierarchy of phrases found in the text, which is browsable via an applet.
Phind.language:Language or languages to use building hierarchy. Languages are identified by two-letter country codes like en (English), es (Spanish), and fr (French). Language is a regular expression, so 'en|fr' (English or French) and '..' (match any language) are valid. Phind.min_occurs:The minimum number of times a phrase must appear in the text to be included in the phrase hierarchy. Phind.savephrases:If set, the phrase infomation will be stored in the given file as text. It is probably a good idea to use an absolute path. Phind.suffixmode:The smode parameter to the phrase extraction program. A value of 0 means that stopwords are ignored, and of 1 means that stopwords are used. Phind.text:The text used to build the phrase hierarchy. Phind.thesaurus:Name of a thesaurus stored in Phind format in the collection's etc directory. Phind.title:The metadata field used to describe each document. Phind.untidy:Don't remove working files. RecentDocumentsList.desc:Classifier that gives a list of newly added or modified documents. RecentDocumentsList.include_docs_added_since:Include only documents modified or added after the specified date (in yyyymmdd or yyyy-mm-dd format). RecentDocumentsList.include_most_recently_added:Include only the specified number of most recently added documents. Only used if include_docs_added_since is not specified. RecentDocumentsList.sort:Metadata to sort List by. If not specified, list will be sorted by date of modification/addition. SectionList.desc:Same as List classifier but includes all sections of document (excluding top level) rather than just top level document itself. Collage.desc:An applet is used to display a collage of images found in the collection. Collage.geometry:The dimensions of the Collage canvas. For a canvas 600 pixels wide by 400 pixels high, for example, specify geometry as 600x400 Collage.maxDepth:Images for collaging are drawn from mirroring the underlying browse classifier. 
This controls the maximum depth of the mirroring process. Collage.maxDisplay:The maximum number of images to show in the collage at any one time. Collage.imageType:Used to control, by expressing file name extensions, which file types are used in the collage. A list of file name extensions is separated by the percent (%%) symbol. Collage.bgcolor:The background color of the collage canvas, specified in hexadecimal form (for example #008000 results in a forest green background). Collage.buttonname:The label for the classifier screen and button in navigation bar. Collage.refreshDelay:Rate, in milliseconds, that the collage canvas is refreshed. Collage.isJava2:Used to control which run-time classes of Java are used. More advanced version of Java (i.e. Java 1.2 onwards) include more sophisticated support for controlling transparency in images, this flag helps control what happens, however the built-in Java runtime for some browsers is version 1.1. The applet is designed to, by default, auto-detect which version of Java the browser is running and act accordingly. Collage.imageMustNotHave:Used to suppress images that should not appear in the collage, such as image buttons that make up the navigation bar. Collage.caption:Optional captions to display below the collage canvas. # # Plugin option descriptions # ArcPlug.desc:Plugin which recurses through an archives.inf file (i.e. the file generated in the archives directory when an import is done), processing each file it finds. BasPlug.adding:adding BasPlug.already_seen:already seen BasPlug.bad_general_option:The %s plugin uses an incorrect option. Check your collect.cfg configuration file. BasPlug.block_exp:Files matching this regular expression will be blocked from being passed to any later plugins in the list. This has no real effect other than to prevent lots of warning messages about input files you don't care about. Each plugin might have a default block_exp. e.g. 
by default HTMLPlug blocks any files with .gif, .jpg, .jpeg, .png or .css file extensions. BasPlug.associate_ext:Causes files with the same root filename as the document being processed by the plugin AND a filename extension from the comma separated list provided by this argument to be associated with the document being processed rather than handled as a separate list. BasPlug.could_not_extract_encoding:WARNING: encoding could not be extracted from %s - defaulting to %s BasPlug.could_not_extract_language:WARNING: language could not be extracted from %s - defaulting to %s BasPlug.could_not_open_for_reading:could not open %s for reading BasPlug.no_cover_image:Do not look for a prefix.jpg file (where prefix is the same prefix as the file being processed) and associate it as a cover image. BasPlug.default_encoding:Use this encoding if -input_encoding is set to 'auto' and the text categorization algorithm fails to extract the encoding or extracts an encoding unsupported by Greenstone. This option can take the same values as -input_encoding. BasPlug.default_language:If Greenstone fails to work out what language a document is the 'Language' metadata element will be set to this value. The default is 'en' (ISO 639 language symbols are used: en = English). Note that if -input_encoding is not set to 'auto' and -extract_language is not set, all documents will have their 'Language' metadata set to this value. BasPlug.desc:Base class for all the import plugins. BasPlug.done_acronym_extract:done extracting acronyms. BasPlug.done_acronym_markup:done acronym markup. BasPlug.done_email_extract:done extracting e-mail addresses. BasPlug.dummy_text:This document has no text. BasPlug.empty_file:file contains no text BasPlug.extract_acronyms:Extract acronyms from within text and set as metadata. BasPlug.extract_email:Extract email addresses as metadata. BasPlug.extract_historical_years:Extract time-period information from historical documents. 
This is stored as metadata with the document. There is a search interface for this metadata, which you can include in your collection by adding the statement, "format QueryInterface DateSearch" to your collection configuration file. BasPlug.extract_language:Identify the language of each document and set 'Language' metadata. Note that this will be done automatically if -input_encoding is 'auto'. BasPlug.extracting:extracting BasPlug.extracting_acronyms:extracting acronyms BasPlug.extract_keyphrases:Extract keyphrases automatically with Kea (default settings). BasPlug.extract_keyphrases_kea4:Extract keyphrases automatically with Kea 4.0 (default settings). Kea 4.0 is a new version of Kea that has been developed for controlled indexing of documents in the domain of agriculture. BasPlug.extract_keyphrase_options:Options for keyphrase extraction with Kea. For example: mALIWEB - use ALIWEB extraction model; n5 - extract 5 keyphrases; eGBK - use GBK encoding. BasPlug.extracting_emails:extracting e-mail addresses BasPlug.file_has_no_text:ERROR: %s contains no text BasPlug.first:Comma separated list of first sizes to extract from the text into a metadata field. The field is called 'FirstNNN'. BasPlug.input_encoding:The encoding of the source documents. Documents will be converted from these encodings and stored internally as utf8. BasPlug.input_encoding.ascii:Plain 7 bit ascii. This may be a bit faster than using iso_8859_1. Beware of using this on a collection of documents that may contain characters outside the plain 7 bit ascii set though (e.g. German or French documents containing accents); use iso_8859_1 instead. BasPlug.input_encoding.auto:Use text categorization algorithm to automatically identify the encoding of each source document. This will be slower than explicitly setting the encoding but will work where more than one encoding is used within the same collection. BasPlug.input_encoding.unicode:Just unicode.
BasPlug.input_encoding.utf8:Either utf8 or unicode -- automatically detected. BasPlug.keyphrases:keyphrases BasPlug.marking_up_acronyms:marking up acronyms BasPlug.markup_acronyms:Add acronym metadata into document text. BasPlug.maximum_century:The maximum named century to be extracted as historical metadata (e.g. 14 will extract all references up to the 14th century). BasPlug.maximum_year:The maximum historical date to be used as metadata (in a Common Era date, such as 1950). BasPlug.missing_kea:Error: The Kea software could not be found at %s. Please download Kea %s from http://www.nzdl.org/Kea and install it in this directory. BasPlug.must_be_implemented:BasPlug::read function must be implemented in sub-class for recursive plugins BasPlug.no_bibliography:Do not try to block bibliographic dates when extracting historical dates. BasPlug.process_exp:A perl regular expression to match against filenames. Matching filenames will be processed by this plugin. For example, using '(?i).html?\$' matches all documents ending in .htm or .html (case-insensitive). BasPlug.read_denied:Read permission denied for %s BasPlug.separate_cjk:Insert spaces between Chinese/Japanese/Korean characters to make each character a word. Use if text is not segmented. BasPlug.smart_block:Block files in a smarter way than just looking at filenames. BasPlug.stems:stems BasPlug.unsupported_encoding:WARNING: %s appears to be encoded in an unsupported encoding (%s) - using %s BasPlug.wrong_encoding:WARNING: %s was read using %s encoding but appears to be encoded as %s. BibTexPlug.desc:BibTexPlug reads bibliography files in BibTex format. BibTexPlug creates a document object for every reference in the file. It is a subclass of SplitPlug, so if there are multiple records, all are read. BookPlug.desc:Creates multi-level document from document containing <<TOC>> level tags. Metadata for each section is taken from any other tags on the same line as the <<TOC>>. e.g. <<Title>>xxxx<</Title>> sets Title metadata.
Everything else between TOC tags is treated as simple html (i.e. no processing of html links or any other HTMLPlug type stuff is done). Expects input files to have a .hb file extension by default (this can be changed by adding a -process_exp option). A file with the same name as the hb file but with a .jpg extension is taken as the cover image (jpg files are blocked by this plugin). BookPlug is a simplification (and extension) of the HBPlug used by the Humanity Library collections. BookPlug is faster as it expects the input files to be cleaner (the input to the HDL collections contains lots of excess html tags around <<TOC>> tags, uses <<I>> tags to specify images, and simply takes all text between <<TOC>> tags and the start of the text to be Title metadata). If you're marking up documents to be displayed in the same way as the HDL collections, use this plugin instead of HBPlug. ConvertToPlug.apply_fribidi:Run the "fribidi" Unicode Bidirectional Algorithm program over the converted file (for right-to-left text). ConvertToPlug.convert_to:Plugin converts to TEXT or HTML or various types of Image (e.g. JPEG, GIF, PNG). ConvertToPlug.convert_to.auto:Automatically select the format converted to. Format chosen depends on input document type, for example Word will automatically be converted to HTML, whereas PowerPoint will be converted to Greenstone's PagedImage format. ConvertToPlug.convert_to.html:HTML format. ConvertToPlug.convert_to.text:Plain text format. ConvertToPlug.convert_to.pagedimg_jpg:JPEG format. ConvertToPlug.convert_to.pagedimg_gif:GIF format. ConvertToPlug.convert_to.pagedimg_png:PNG format. ConvertToPlug.desc:This plugin is inherited by such plugins as WordPlug, PPTPlug, PSPlug, RTFPlug and PDFPlug. It facilitates the conversion of these document types to either HTML, TEXT or a series of images. It works by dynamically loading an appropriate secondary plugin (HTMLPlug, StructuredHTMLPlug, PagedImgPlug or TEXTPlug) based on the plugin argument 'convert_to'.
ConvertToPlug.keep_original_filename:Keep the original filename for the associated file, rather than converting to doc.pdf, doc.doc etc. ConvertToPlug.use_strings:If set, a simple strings function will be called to extract text if the conversion utility fails. ConvertToRogPlug.desc:A plugin that inherits from RogPlug. CSVPlug.desc:A plugin for files in comma-separated value format. A new document will be created for each line of the file. DBPlug.desc:A plugin that imports records from a database. This uses perl's DBI module, which includes back-ends for mysql, postgresql, comma separated values (CSV), MS Excel, ODBC, sybase, etc... Extra modules may need to be installed to use this. See /etc/packages/example.dbi for an example config file. DBPlug.title_sub:Substitution expression to modify string stored as Title. Used by, for example, PSPlug to remove "Page 1" etc from text used as the title. DSpacePlug.desc:DSpacePlug takes a collection of documents exported from DSpace and imports them into Greenstone. DSpacePlug.first_inorder_ext:This is used to identify the primary data stream of a DSpace collection document. With this option, the system will try the defined extension types in sequence to look for the possible primary stream. DSpacePlug.first_inorder_mime:This is used to identify the primary data stream of a DSpace collection document. With this option, the system will try the defined mime types in sequence to look for the possible primary stream. DSpacePlug.only_first_doc:This is used to identify the primary data stream of a DSpace collection document. With this option, the system will treat the first document in the dublin_core metadata file as the possible primary stream. EMAILPlug.desc:EMAILPlug reads email files. These are named with a simple number (i.e.
as they appear in maildir folders) or with the extension .mbx (for mbox mail file format).\nDocument text: The document text consists of all the text after the first blank line in the document.\nMetadata (not Dublin Core!):\n\t\$Headers All the header content (optional, not stored by default)\n\t\$Subject Subject: header\n\t\$To To: header\n\t\$From From: header\n\t\$FromName Name of sender (where available)\n\t\$FromAddr E-mail address of sender\n\t\$DateText Date: header\n\t\$Date Date: header in GSDL format (eg: 19990924) EMAILPlug.no_attachments:Do not save message attachments. EMAILPlug.headers:Store email headers as "Headers" metadata. EMAILPlug.split_exp:A perl regular expression used to split files containing many messages into individual documents. ExcelPlug.desc:A plugin for importing Microsoft Excel files (versions 95 and 97). FOXPlug.desc:Plugin to process a Foxbase dbt file. This plugin provides the basic functionality to read in the dbt and dbf files and process each record. This general plugin should be overridden for a particular database to process the appropriate fields in the file. GAPlug.desc:Processes Greenstone Archive XML documents. Note that this plugin does no syntax checking (though the XML::Parser module tests for well-formedness). It's assumed that the Greenstone Archive files conform to their DTD. GISBasPlug.extract_placenames:Extract placenames from within text and set as metadata. Requires GIS extension to Greenstone. GISBasPlug.gazetteer:Gazetteer to use to extract placenames from within text and set as metadata. Requires GIS extension to Greenstone. GISBasPlug.place_list:When extracting placenames, include list of placenames at start of the document. Requires GIS extension to Greenstone. GMLPlug.desc:Plugin which processes a GML format document. It assumes that gml tags are all in lower-case. HBPlug.desc:Plugin which processes an HTML book directory.
This plugin is used by the Humanity Library collections and does not handle input encodings other than ascii or extended ascii. This code is kind of ugly and could no doubt be made to run faster; by leaving it in this state I hope to encourage people to make their collections use HBSPlug instead ;-)\n\nUse HBSPlug if creating a new collection and marking up files like the Humanity Library collections. HBSPlug accepts all input encodings but expects the marked up files to be cleaner than those used by the Humanity Library collections. HTMLPlug.assoc_files:Perl regular expression of file extensions to associate with html documents. HTMLPlug.desc:This plugin processes HTML files. HTMLPlug.description_tags:Split document into sub-sections where <Section> tags occur. '-keep_head' will have no effect when this option is set. HTMLPlug.extract_style:Extract style and script information from the HTML tag and save as DocumentHeader metadata. This will be set in the document page as the _document:documentheader_ macro. HTMLPlug.file_is_url:Set if input filenames make up url of original source documents e.g. if a web mirroring tool was used to create the import directory structure. HTMLPlug.hunt_creator_metadata:Find as much metadata as possible on authorship and place it in the 'Creator' field. HTMLPlug.keep_head:Don't remove headers from html files. HTMLPlug.metadata_fields:Comma separated list of metadata fields to attempt to extract. Use 'tag<tagname>' to have the contents of the first <tag> pair put in a metadata element called 'tagname'. Capitalise this as you want the metadata capitalised in Greenstone, since the tag extraction is case insensitive. HTMLPlug.no_metadata:Don't attempt to extract any metadata from files. HTMLPlug.no_strip_metadata_html:Comma separated list of metadata names, or 'all'. Used with -description_tags, it prevents stripping of HTML tags from the values for the specified metadata. HTMLPlug.nolinks:Don't make any attempt to trap links (setting this flag may improve speed of building/importing but any relative links within documents will be broken). HTMLPlug.rename_assoc_files:Renames files associated with documents (e.g. images). Also creates much shallower directory structure (useful when creating collections to go on cd-rom). HTMLPlug.sectionalise_using_h_tags:Automatically create a sectioned document using h1, h2, ... hX tags. HTMLPlug.title_sub:Substitution expression to modify string stored as Title. Used by, for example, PDFPlug to remove "Page 1", etc from text used as the title. HTMLPlug.tidy_html:If set, converts an HTML document to well-formed XHTML. It enables users to view the document in book format.
HTMLPlug.old_style_HDL:Marks whether the files in this collection are sectionalized using the old HDL section style. ImagePlug.converttotype:Convert main image to format 's'. ImagePlug.desc:This plugin processes images, adding basic metadata. ImagePlug.minimumsize:Ignore images smaller than n bytes. ImagePlug.noscaleup:Don't scale up small images when making thumbnails. ImagePlug.screenviewsize:If set, makes an image of size n for screen display and sets Screen, ScreenSize, ScreenWidth and ScreenHeight metadata. By default it is not set. ImagePlug.screenviewtype:If -screenviewsize is set, this sets the screen display image type. ImagePlug.thumbnailsize:Make thumbnails of size nxn. ImagePlug.thumbnailtype:Make thumbnails in format 's'. IndexPlug.desc:This recursive plugin processes an index.txt file. The index.txt file should contain the list of files to be included in the collection followed by any extra metadata to be associated with each file.\n\nThe index.txt file should be formatted as follows: The first line may be a key (beginning with key:) to name the metadata fields (e.g. key: Subject Organization Date). The following lines will contain a filename followed by the value that metadata entry is to be set to. (e.g. 'irma/iw097e 3.2 unesco 1993' will associate the metadata Subject=3.2, Organization=unesco, and Date=1993 with the file irma/iw097e if the above key line was used)\n\nNote that if any of the metadata fields use the Hierarchy classifier plugin then the value they're set to should correspond to the first field (the descriptor) in the appropriate classification file.\n\nMetadata values may be named separately using a tag (e.g. <Subject>3.2</Subject>) and this will override any name given to them by the key line. If there's no key line any unnamed metadata value will be named 'Subject'. ISISPlug.desc:This plugin processes CDS/ISIS databases.
For each CDS/ISIS database processed, three files must exist in the collection's import folder: the Master file (.mst), the Field Definition Table (.fdt), and the Cross-Reference File (.xrf). ISISPlug.subfield_separator:The string used to separate subfields in CDS/ISIS database records. ISISPlug.entry_separator:The string used to separate multiple values for single metadata fields in CDS/ISIS database records. LaTeXPlug.desc:Plugin for LaTeX documents. LOMPlug.desc:Plugin for importing LOM (Learning Object Metadata) files. LOMPlug.root_tag:The DocType of the XML file (or a regular expression that matches the root element). LOMPlug.check_timestamp:Check timestamps of previously downloaded files, and only download again if source file is newer. LOMPlug.download_srcdocs:Download the source document if one is specified (in general^identifier^entry or technical^location). This option should specify a regular expression to match filenames against before downloading. Note, this currently doesn't work for documents outside a firewall. MARCPlug.desc:Basic MARC plugin. MARCPlug.metadata_mapping:Name of file that includes mapping details from MARC values to Greenstone metadata names. Defaults to 'marctodc.txt' found in the site's etc directory. MARCXMLPlug.desc:MARCXML plugin. MARCXMLPlug.metadata_mapping_file:Name of file that includes mapping details from MARC values to Greenstone metadata names. Defaults to 'marctodc.txt' found in the site's etc directory. MediaWikiPlug.desc:Plugin for importing MediaWiki web pages. MediaWikiPlug.show_toc:Add the 'table of contents' from the MediaWiki website's main page to the collection's About page. Requires a Perl regular expression in toc_exp below to match the 'table of contents' section. MediaWikiPlug.delete_toc:Delete the 'table of contents' section on each HTML page. Requires a Perl regular expression in toc_exp below to match the 'table of contents' section.
MediaWikiPlug.toc_exp:A Perl regular expression to match the 'table of contents'. The default value matches common MediaWiki web pages. MediaWikiPlug.delete_nav:Delete the navigation section. Requires a Perl regular expression in nav_div_exp below. MediaWikiPlug.nav_div_exp:A Perl regular expression to match the navigation section. The default value matches common MediaWiki web pages. MediaWikiPlug.delete_searchbox:Delete the searchbox section. Requires a Perl regular expression in searchbox_div_exp below. MediaWikiPlug.searchbox_div_id:A Perl regular expression to match the searchbox section. The default value matches common MediaWiki web pages. MediaWikiPlug.remove_title_suffix_exp:A Perl regular expression to trim the extracted title. For example, \\s-(.+) will trim title contents after "-". MetadataCSVPlug.desc:A plugin for metadata in comma-separated value format. The Filename field in the CSV file is used to determine which document the metadata belongs to. MetadataPass.desc:On-the-side base class to BasPlug that supports metadata plugins that utilise the metadata_read pass of import.pl METSPlug.desc:Processes Greenstone-style METS documents. GISBasPlug.desc:On-the-side base class to BasPlug that supports GIS capabilities. NULPlug.desc:Dummy (.nul) file plugin. Used with the files produced by exploding metadata database files. NULPlug.assoc_field:Name of a metadata field that will be set for each nul file. NULPlug.add_metadata_as_text:Add a table of metadata as the text of the document, rather than "This document has no text". NULPlug.remove_namespace_for_text:Remove namespaces from metadata names in the document text (if add_metadata_as_text is set). OAIPlug.desc:Basic Open Archives Initiative (OAI) plugin. OggVorbisPlug.add_technical_metadata:Add technical (e.g. bitrate) metadata. OggVorbisPlug.desc:A plugin for importing Ogg Vorbis audio files.
OpenDocumentPlug.desc:Plugin for OASIS OpenDocument format documents (used by OpenOffice 2.0). PagedImgPlug.desc:Plugin for documents made up of a sequence of images, with optional OCR text for each image. This plugin processes .item files which list the sequence of image and text files, and provide metadata. PagedImgPlug.documenttype:Set the document type (used for display). PagedImgPlug.documenttype.paged:Paged documents have next and previous arrows and a 'go to page X' box. PagedImgPlug.documenttype.hierarchy:Hierarchical documents have a table of contents. PagedImgPlug.headerpage:Add a top level header page (that contains no image) to each document. PagedImgPlug.screenview:Produce a screenview image for each image, and set Screen, ScreenSize, ScreenWidth and ScreenHeight metadata. PagedImgPlug.screenviewsize:Make screenview images of size nxn. PagedImgPlug.screenviewtype:Make screenview images in format 's'. PagedImgPlug.thumbnail:Produce a thumbnail for each image. PDFPlug.allowimagesonly:Allow PDF files with no extractable text. Avoids the need to have -complex set. Only useful with convert_to html. PDFPlug.complex:Create more complex output. With this option set the output html will look much more like the original PDF file. For this to function properly you need Ghostscript installed (for *nix, gs should be on your path, while for Windows you must have gswin32c.exe on your path). PDFPlug.desc:Plugin that processes PDF documents. PDFPlug.nohidden:Prevent pdftohtml from attempting to extract hidden text. This is only useful if the -complex option is also set. PDFPlug.noimages:Don't attempt to extract images from PDF. PDFPlug.use_sections:Create a separate section for each page of the PDF file. PDFPlug.zoom:The factor by which to zoom the PDF for output (this is only useful if -complex is set). PPTPlug.desc:A plugin for importing Microsoft PowerPoint files.
PPTPlug.windows_scripting:Use Microsoft Windows scripting technology (Visual Basic for Applications) to get PPT to convert documents to various image types (e.g. JPEG, PNG, GIF) rather than rely on the open source package ppttohtml. ProCitePlug.desc:A plugin for (exported) ProCite databases. PSPlug.desc:This is a \"poor man's\" ps to text converter. If you are serious, consider using the PRESCRIPT package, which is available for download at http://www.nzdl.org/html/software.html PSPlug.extract_date:Extract date from PS header. PSPlug.extract_pages:Extract pages from PS header. PSPlug.extract_title:Extract title from PS header. RealMediaPlug.desc:A plugin for processing Real Media files. RecPlug.desc:RecPlug is a plugin which recurses through directories processing each file it finds. RecPlug.recheck_directories:After the files in an import directory have been processed, re-read the directory to discover any new files created. RecPlug.use_metadata_files:(DEPRECATED - Add MetadataXMLPlug to the list of plugins instead) Read metadata from metadata XML files. ReferPlug.desc:ReferPlug reads bibliography files in Refer format. ReferPlug.longdesc:ReferPlug reads bibliography files in Refer format.\nBy Gordon W. Paynter (gwp\@cs.waikato.ac.nz), November 2000\n\nLoosely based on hcibib2Plug by Steve Jones (stevej\@cs.waikato.ac.nz), which was based on EMAILPlug by Gordon Paynter (gwp\@cs.waikato.ac.nz), which was based on old versions of HTMLPlug and HCIBIBPlug by Stefan Boddie and others -- it's hard to tell what came from where, now.\n\nReferPlug creates a document object for every reference in the file.
It is a subclass of SplitPlug, so if there are multiple records, all are read.\n\nDocument text:\n\tThe document text consists of the reference in Refer format.\nMetadata:\n\t\$Creator \%A Author name\n\t\$Title \%T Title of article or book\n\t\$Journal \%J Title of Journal\n\t\$Booktitle \%B Title of book containing the publication\n\t\$Report \%R Type of Report, paper or thesis\n\t\$Volume \%V Volume Number of Journal\n\t\$Number \%N Number of Journal within Volume\n\t\$Editor \%E Editor name\n\t\$Pages \%P Page Number of article\n\t\$Publisher \%I Name of Publisher\n\t\$Publisheraddr \%C Publisher's address\n\t\$Date \%D Date of publication\n\t\$Keywords \%K Keywords associated with publication\n\t\$Abstract \%X Abstract of publication\n\t\$Copyright\t\%* Copyright information for the article RogPlug.desc:Creates simple single-level documents from .rog or .mdb files. RTFPlug.desc:Plugin for importing Rich Text Format files. SRCPlug.desc:Filename is currently used for Title (optionally minus some prefix). Current languages:\ntext: READMEs/Makefiles\nC/C++ (currently extracts #include statements and C++ class decls)\nPerl (currently only done as text)\nShell (currently only done as text) SRCPlug.remove_prefix:Remove this leading pattern from the filename (eg -remove_prefix /tmp/XX/src/). The default is to remove the whole path from the filename. SplitPlug.desc:SplitPlug is a plugin for splitting input files into segments that will then be individually processed. This plugin should not be called directly. Instead, if you need to process input files that contain several documents, you should write a plugin with a process function that will handle one of those documents and have it inherit from SplitPlug. See ReferPlug for an example. SplitPlug.split_exp:A perl regular expression to split input files into segments. StructuredHTMLPlug.desc:A plugin to process structured HTML documents, splitting them into sections based on style information.
StructuredHTMLPlug.delete_toc:Remove any table of contents, list of figures etc from the converted HTML file. Styles for these are specified by the toc_header option. StructuredHTMLPlug.title_header:possible user-defined styles for the title header. StructuredHTMLPlug.level1_header:possible user-defined styles for the level1 header in the HTML document (equivalent to <h1>). StructuredHTMLPlug.level2_header:possible user-defined styles for the level2 header in the HTML document (equivalent to <h2>). StructuredHTMLPlug.level3_header:possible user-defined styles for the level3 header in the HTML document (equivalent to <h3>). StructuredHTMLPlug.toc_header:possible user-defined header styles for the table of contents, table of figures etc, to be removed if delete_toc is set. TEXTPlug.desc:Creates simple single-level document. Adds Title metadata of first line of text (up to 100 characters long). TEXTPlug.title_sub:Substitution expression to modify string stored as Title. Used by, for example, PSPlug to remove "Page 1" etc from text used as the title. UnknownPlug.assoc_field:Name of the metadata field that will hold the associated file's name. UnknownPlug.desc:This is a simple plugin for importing files in formats that Greenstone doesn't know anything about. A fictional document will be created for every such file, and the file itself will be passed to Greenstone as the \"associated file\" of the document. UnknownPlug.file_format:Type of the file (e.g. MPEG, MIDI, ...) UnknownPlug.mime_type:Mime type of the file (e.g. image/gif). UnknownPlug.process_extension:Process files with this file extension. This option is an alternative to process_exp that is simpler to use but less flexible. UnknownPlug.srcicon:Specify a macro name (without underscores) to use as srcicon metadata. MP3Plug.desc:Plugin for processing MP3 files. MP3Plug.assoc_images:Use Google image search to locate images related to the MP3 file based on ID3 Title and Artist metadata. MP3Plug.applet_metadata:Used to store [applet] metadata for each document that contains the necessary HTML for an MP3 audio player applet to play that file. MP3Plug.metadata_fields:Comma separated list of metadata fields to extract (assuming present) in an MP3 file. Use \"*\" to extract all the fields. W3ImgPlug.aggressiveness:Range of related text extraction techniques to use. W3ImgPlug.aggressiveness.1:Filename, path, ALT text only. W3ImgPlug.aggressiveness.2:All of 1, plus caption where available. W3ImgPlug.aggressiveness.3:All of 2, plus near paragraphs where available. W3ImgPlug.aggressiveness.4:All of 3, plus previous headers (<h1>, <h2> ...) where available. W3ImgPlug.aggressiveness.5:All of 4, plus textual references where available. W3ImgPlug.aggressiveness.6:All of 4, plus page metatags (title, keywords, etc). W3ImgPlug.aggressiveness.7:All of 6, 5 and 4 combined. W3ImgPlug.aggressiveness.8:All of 7, plus repeat caption, filename, etc (raise ranking of more relevant results). W3ImgPlug.aggressiveness.9:All of 1, plus full text of source page. W3ImgPlug.caption_length:Maximum length of captions (in characters). W3ImgPlug.convert_params:Additional parameters for ImageMagick convert on thumbnail creation. For example, '-raise' will give a three dimensional effect to thumbnail images. W3ImgPlug.desc:A plugin for extracting images and associated text from webpages. W3ImgPlug.document_text:Add image text as document:text (otherwise IndexedText metadata field). W3ImgPlug.index_pages:Index the pages along with the images. Otherwise reference the pages at the source URL. W3ImgPlug.max_near_text:Maximum characters near images to extract. W3ImgPlug.min_height:Pixels. Skip images shorter than this. W3ImgPlug.min_near_text:Minimum characters of near text or caption to extract. W3ImgPlug.min_size:Bytes. Skip images smaller than this. W3ImgPlug.min_width:Pixels. Skip images narrower than this. W3ImgPlug.neartext_length:Target length of near text (in characters). W3ImgPlug.no_cache_images:Don't cache images (point to URL of original). W3ImgPlug.smallpage_threshold:Images on pages smaller than this (bytes) will have the page metadata (title, keywords, etc) added. W3ImgPlug.textrefs_threshold:Threshold for textual references. Lower values mean the algorithm is less strict. W3ImgPlug.thumb_size:Max thumbnail size. Both width and height. WordPlug.desc:A plugin for importing Microsoft Word documents. WordPlug.windows_scripting:Use Microsoft Windows scripting technology (Visual Basic for Applications) to get Word to convert documents to HTML rather than rely on the open source package WvWare.
Causes the Word application to open on screen if it is not already running. WordPlug.metadata_fields:This is to retrieve metadata from the HTML document converted by VB scripting. It allows users to define a comma separated list of metadata fields to attempt to extract. Use 'tag<tagname>' to have the contents of the first <tag> pair put in a metadata element called 'tagname'. Capitalise this as you want the metadata capitalised in Greenstone, since the tag extraction is case insensitive. XMLPlug.desc:Base class for XML plugins. XMLPlug.xslt:Transform a matching input document with the XSLT in the named file. A relative filename is assumed to be in the collection's file area, for instance etc/mods2dc.xsl. ZIPPlug.desc:Plugin which handles compressed and/or archived input formats. Currently handled formats and file extensions are:\ngzip (.gz, .z, .tgz, .taz)\nbzip (.bz)\nbzip2 (.bz2)\nzip (.zip .jar)\ntar (.tar)\n\nThis plugin relies on the following utilities being present (if trying to process the corresponding formats):\ngunzip (for gzip)\nbunzip (for bzip)\nbunzip2 (for bzip2)\nunzip (for zip)\ntar (for tar) # # Download module option descriptions # BasDownload.desc:Base class for Download modules MediaWikiDownload.desc:A module for downloading from MediaWiki websites MediaWikiDownload.reject_filetype:List of URL patterns to ignore, separated by commas, e.g. *cgi-bin*,*.ppt ignores hyperlinks that contain either 'cgi-bin' or '.ppt' MediaWikiDownload.reject_filetype_disp:List of URL patterns to ignore, separated by commas MediaWikiDownload.exclude_directories:List of directories to exclude (each must be an absolute path), e.g. /people,/documentation will exclude the 'people' and 'documentation' subdirectories of the site currently being crawled.
MediaWikiDownload.exclude_directories_disp:List of directories to exclude, separated by commas OAIDownload.desc:A module for downloading from OAI repositories OAIDownload.url_disp:Source URL OAIDownload.url:OAI repository URL OAIDownload.set_disp:Set OAIDownload.set:Restrict the download to the specified set in the repository OAIDownload.get_doc_disp:Get document OAIDownload.get_doc:Download the source document if one is specified in the record OAIDownload.max_records_disp:Max records OAIDownload.max_records:Maximum number of records to download SRWDownload.desc:A module for downloading from SRW (Search/Retrieve Web Service) repositories WebDownload.desc:A module for downloading from the Internet via HTTP or FTP WebDownload.url:Source URL WebDownload.url_disp:Source URL WebDownload.depth:How many hyperlinks deep to go when downloading WebDownload.depth_disp:Download Depth WebDownload.below:Only mirror files below this URL WebDownload.below_disp:Only mirror files below this URL WebDownload.within:Only mirror files within the same site WebDownload.within_disp:Only mirror files within the same site WebDownload.html_only:Download only HTML files, and ignore associated files e.g. images and stylesheets WebDownload.html_only_disp:Download only HTML files WgetDownload.desc:Base class that handles calls to wget WgetDownload.proxy_on:Proxy on WgetDownload.proxy_host:Proxy host WgetDownload.proxy_port:Proxy port WgetDownload.user_name:User name WgetDownload.user_password:User password Z3950Download.desc:A module for downloading from Z3950 repositories Z3950Download.host:Host URL Z3950Download.host_disp:Host Z3950Download.port:Port number of the repository Z3950Download.port_disp:Port Z3950Download.database:Database to search for records in Z3950Download.database_disp:Database Z3950Download.find:Retrieve records containing the specified search term Z3950Download.find_disp:Find Z3950Download.max_records:Maximum number of records to download Z3950Download.max_records_disp:Max Records # # Plugout module option descriptions # BasPlugout.desc:Base class for all the plugouts. BasPlugout.bad_general_option:The %s plugout uses an incorrect option. # # Perl module strings # classify.could_not_find_classifier:ERROR: Could not find classifier \"%s\" download.could_not_find_download:ERROR: Could not find download module \"%s\" plugin.could_not_find_plugin:ERROR: Could not find plugin \"%s\" plugin.including_archive:including the contents of 1 ZIP/TAR archive plugin.including_archives:including the contents of %d ZIP/TAR archives plugin.kill_file:Process killed by .kill file plugin.n_considered:%d documents were considered for processing plugin.n_included:%d were processed and included in the collection plugin.n_rejected:%d were rejected plugin.n_unrecognised:%d were unrecognised plugin.no_plugin_could_process:WARNING: No plugin could process %s plugin.no_plugin_could_recognise:WARNING: No plugin could recognise %s plugin.no_plugin_could_process_this_file:no plugin could process this file plugin.no_plugin_could_recognise_this_file:no plugin could recognise this file plugin.one_considered:1 document was considered for processing plugin.one_included:1 was processed and included in the collection plugin.one_rejected:1 was rejected plugin.one_unrecognised:1 was unrecognised plugin.see_faillog:See %s for a list of unrecognised and/or rejected documents PrintUsage.default:Default PrintUsage.required:REQUIRED plugout.could_not_find_plugout:ERROR: Could not find plugout \"%s\"