#
# Resource bundle description
#
Language.code:en
Language.name:English
OutputEncoding.unix:iso_8859_1
OutputEncoding.windows:iso_8859_1

#
# Common output messages and strings
#
common.cannot_create_file:ERROR: Can't create file %s
common.cannot_find_cfg_file:ERROR: Can't find the configuration file %s
common.cannot_open:ERROR: Can't open %s
common.cannot_open_fail_log:ERROR: Can't open fail log %s
common.cannot_open_output_file:ERROR: Can't open output file %s
common.cannot_read:ERROR: Can't read %s
common.cannot_read_file:ERROR: Can't read file %s
common.general_options:general options (for %s)
common.must_be_implemented:function must be implemented in sub-class
common.options:options
common.processing:processing
common.specific_options:specific options
common.usage:Usage
common.info:info
common.invalid_options:Invalid arguments: %s
common.true:true
common.false:false
common.deprecated: DEPRECATED

#
# Script option descriptions and output messages
#
scripts.language:Language to display option descriptions in (e.g. 'en_US' specifies American English). Requires translations of the option descriptions to exist in the perllib/strings_language-code.rb file.
scripts.xml:Produces the information in an XML form, without 'pretty' comments but with much more detail.
scripts.listall:Lists all items known about.
scripts.describeall:Display options for all items known about.
scripts.both_old_options:WARNING: -removeold was specified with -keepold or -incremental, defaulting to -removeold. Current contents of %s directory will be deleted.
scripts.no_old_options:WARNING: None of -removeold, -keepold or -incremental were specified, defaulting to -removeold. Current contents of %s directory will be deleted.
scripts.gli:A flag set when running this script from GLI; enables output specific to GLI.

# -- buildcol.pl --
buildcol.archivedir:Where the archives live.
buildcol.builddir:Where to put the built indexes.
buildcol.cachedir:Collection will be temporarily built here before being copied to the build directory.
buildcol.cannot_open_cfg_file:WARNING: Can't open config file for updating: %s
buildcol.collectdir:The path of the "collect" directory.
buildcol.copying_back_cached_build:Copying back the cached build
buildcol.create_images:Attempt to create default images for new collection. This relies on the Gimp being installed along with relevant perl modules to allow scripting from perl.
buildcol.debug:Print output to STDOUT.
buildcol.desc:PERL script used to build a Greenstone collection from archive documents.
buildcol.faillog:Fail log filename. This log receives the filenames of any files which fail to be processed.
buildcol.incremental_default_builddir:WARNING: The building directory has defaulted to 'building'. If you want to incrementally add to the index directory, please use the "-builddir index" option to buildcol.pl.
buildcol.index:Index to build (will build all in config file if not set).
buildcol.incremental:Only index documents which have not been previously indexed. Implies -keepold. Relies on the lucene indexer.
buildcol.keepold:Will not destroy the current contents of the building directory.
buildcol.maxdocs:Maximum number of documents to build.
buildcol.maxnumeric:The maximum number of digits a 'word' can have in the index dictionary. Large numbers are split into several words for indexing. For example, if maxnumeric is 4, "1342663" will be split into "1342" and "663".
buildcol.mode:The parts of the building process to carry out.
buildcol.mode.all:Do everything.
buildcol.mode.build_index:Just index the text.
buildcol.mode.compress_text:Just compress the text.
buildcol.mode.infodb:Just build the metadata database.
buildcol.no_default_images:Default images will not be generated.
buildcol.no_image_script:WARNING: Image making script could not be found: %s
buildcol.no_strip_html:Do not strip the html tags from the indexed text (only used for mgpp collections).
buildcol.no_text:Don't store compressed text. This option is useful for minimizing the size of the built indexes if you intend always to display the original documents at run time (i.e. you won't be able to retrieve the compressed text version).
buildcol.sections_index_document_metadata:Index document level metadata at section level.
buildcol.sections_index_document_metadata.never:Don't index any document metadata at section level.
buildcol.sections_index_document_metadata.always:Add all specified document level metadata even if section level metadata of that name exists.
buildcol.sections_index_document_metadata.unless_section_metadata_exists:Only add document level metadata if no section level metadata of that name exists.
buildcol.out:Filename or handle to print output status to.
buildcol.params:[options] collection-name
buildcol.remove_empty_classifications:Hide empty classifiers and classification nodes (those that contain no documents).
buildcol.removeold:Will remove the old contents of the building directory.
buildcol.unlinked_col_images:Collection images may not be linked correctly.
buildcol.unknown_mode:Unknown mode: %s
buildcol.updating_archive_cache:Updating archive cache
buildcol.verbosity:Controls the quantity of output. 0=none, 3=lots.

# -- classinfo.pl --
classinfo.collection:Giving a collection name will make classinfo.pl look in collect/collection-name/perllib/classify first. If the classifier is not found there it will look in the general perllib/classify directory.
classinfo.desc:Prints information about a classifier.
classinfo.general_options:General options are inherited from parent classes of the classifier.
classinfo.info:info
classinfo.no_classifier_name:ERROR: You must provide a classifier name.
classinfo.option_types:Classifiers may take two types of options
classinfo.params:[options] classifier-name
classinfo.passing_options:Options may be passed to any classifier by including them in your collect.cfg configuration file.
classinfo.specific_options:Specific options are defined within the classifier itself, and are available only to this particular classifier.

# -- downloadfrom.pl --
downloadfrom.cache_dir:The location of the cache directory
downloadfrom.desc:Downloads files from an external server
downloadfrom.download_mode:The type of server to download from
downloadfrom.download_mode.Web:HTTP
downloadfrom.download_mode.MediaWiki:MediaWiki website
downloadfrom.download_mode.OAI: Open Archives Initiative
downloadfrom.download_mode.z3950:z3950 server
downloadfrom.download_mode.SRW:SearchRetrieve Webservice
downloadfrom.incorrect_mode:download_mode parameter was incorrect.
downloadfrom.info:Print information about the server, rather than downloading
downloadfrom.params:[general options] [specific download options]

# -- downloadinfo.pl --
downloadinfo.desc:Prints information about a download module
downloadinfo.collection:Giving a collection name will make downloadinfo.pl look in collect/collection-name/perllib/downloaders first. If the module is not found there it will look in the general perllib/downloaders directory.
downloadinfo.params:[options] [download-module]
downloadinfo.general_options:General options are inherited from parent classes of the download modules.
downloadinfo.specific_options:Specific options are defined within the download module itself, and are available only to this particular downloader.
downloadinfo.option_types:Download modules may take two types of options

# -- explode_metadata_database.pl --
explode.desc:Explode a metadata database
explode.collection:The collection name. Some plugins look for auxiliary files in the collection folder.
explode.document_field:The metadata element specifying the file name of documents to obtain and include in the collection.
explode.document_prefix:A prefix for the document locations (for use with the document_field option).
explode.document_suffix:A suffix for the document locations (for use with the document_field option).
explode.encoding:Encoding to use when reading in the database file
explode.metadata_set:Metadata set (namespace) to export all metadata as
explode.plugin: Plugin to use for exploding
explode.plugin_options:Options to pass to the plugin before exploding. Option names must start with -. Separate option names and values with a space. Cannot be used with -use_collection_plugin_options.
explode.use_collection_plugin_options: Read the collection configuration file and use the options for the specified plugin. Requires the -collection option. Cannot be used with -plugin_options.
explode.params: [options] filename
explode.records_per_folder: The number of records to put in each subfolder.

# -- replace_srcdoc_with_html.pl --
srcreplace.desc: Replace source document with the generated HTML file when rebuilding
srcreplace.params: [options] filename
srcreplace.plugin: Plugin to use for converting the source document

# -- exportcol.pl --
exportcol.out:Filename or handle to print output status to.
exportcol.cddir:The name of the directory that the CD contents are exported to.
exportcol.cdname:The name of the CD-ROM -- this is what will appear in the start menu once the CD-ROM is installed.
exportcol.desc:PERL script used to export one or more collections to a Windows CD-ROM.
exportcol.noinstall:Create a CD-ROM where the library runs directly off the CD-ROM and nothing is installed on the host computer.
exportcol.params:[options] collection-name1 collection-name2 ...
exportcol.coll_not_found:Ignoring invalid collection %s: collection not found at %s.
exportcol.coll_dirs_not_found:Ignoring invalid collection %s: one of the following directories not found:
exportcol.fail:exportcol.pl failed:
exportcol.no_valid_colls:No valid collections specified to export.
exportcol.couldnt_create_dir:Could not create directory %s.
exportcol.couldnt_create_file:Could not create %s.
exportcol.instructions:To create a self-installing Windows CD-ROM, write the contents of this folder out to a CD-ROM.
exportcol.non_exist_files:One or more of the following necessary files and directories does not exist:
exportcol.success:exportcol.pl succeeded:
exportcol.output_dir:The exported collections (%s) are in %s.
exportcol.export_coll_not_installed:The Export to CD-ROM functionality has not been installed.

# -- import.pl --
import.archivedir:Where the converted material ends up.
import.manifest:An XML file that details what files are to be imported. Used instead of recursively descending the import folder, typically for incremental building.
import.cannot_open_stats_file:WARNING: Couldn't open stats file %s.
import.cannot_open_fail_log:ERROR: Couldn't open fail log %s
import.cannot_sort:WARNING: import.pl cannot sort documents when groupsize > 1. sortmeta option will be ignored.
import.collectdir:The path of the "collect" directory.
import.complete:Import complete
import.debug:Print imported text to STDOUT (for GreenstoneXML importing)
import.desc:PERL script used to import files into a format (GreenstoneXML or GreenstoneMETS) ready for building.
import.faillog:Fail log filename. This log receives the filenames of any files which fail to be processed.
import.groupsize:Number of import documents to group into one XML file.
import.gzip:Use gzip to compress resulting xml documents (don't forget to include ZIPPlugin in your plugin list when building from compressed documents).
import.importdir:Where the original material lives.
import.incremental:Only import documents which are newer (by timestamp) than the current archive files. Implies -keepold.
import.keepold:Will not destroy the current contents of the archives directory.
import.maxdocs:Maximum number of documents to import.
import.no_import_dir:Error: Import dir (%s) not found.
import.no_plugins_loaded:ERROR: No plugins loaded.
import.OIDtype:The method to use when generating unique identifiers for each document.
import.OIDtype.hash:Hash the contents of the file. Document identifiers will be the same every time the collection is imported.
import.OIDtype.incremental:Use a simple document count. Significantly faster than "hash", but does not assign the same identifier to the same document content, and further documents cannot be added to existing archives.
import.OIDtype.assigned:Use the metadata value given by the OIDmetadata option (preceded by 'D'); if unspecified, for a particular document a hash is used instead. These identifiers should be unique.
import.OIDtype.dirname:Use the parent directory name (preceded by 'J'). There should only be one document per directory, and directory names should be unique. E.g. import/b13as/h15ef/page.html will get an identifier of Jh15ef.
import.OIDmetadata:Specifies the metadata element that holds the document's unique identifier, for use with -OIDtype=assigned.
import.saveas:Format that the archive files should be saved as.
import.out:Filename or handle to print output status to.
import.params:[options] collection-name
import.removeold:Will remove the old contents of the archives directory.
import.removing_archives:Removing current contents of the archives directory...
import.removing_tmpdir:Removing contents of the collection "tmp" directory...
import.reversesort:Sort in reverse order. Used with the -sortmeta option.
import.site:Site to find collect directory in (for Greenstone 3 installation).
import.sortmeta:Sort documents alphabetically by metadata for building. Search results for boolean queries will be displayed in this order. This will be disabled if groupsize > 1. May be a comma separated list to sort by more than one metadata value.
import.statsfile:Filename or handle to print import statistics to.
import.stats_backup:Will print stats to STDERR instead.
import.verbosity:Controls the quantity of output. 0=none, 3=lots.
# -- schedule.pl --
schedule.deleted:Scheduled execution deleted for collection
schedule.scheduled:Execution script created for collection
schedule.cron:Scheduled execution set up for collection
schedule.params:[options]
schedule.error.email:-email requires -smtp, -toaddr and -fromaddr to be specified.
schedule.error.importbuild:-import and -build must be specified.
schedule.error.colname:A collection name must be specified using -colname.
schedule.gli:Running from the GLI
schedule.frequency:How often to automatically re-build the collection
schedule.frequency.hourly:Re-build every hour
schedule.frequency.daily:Re-build every day
schedule.frequency.weekly:Re-build every week
schedule.action:How to set up automatic re-building
schedule.action.add:Schedule automatic re-building
schedule.action.update:Update existing scheduling
schedule.action.delete:Delete existing scheduling
schedule.email:Send email notification
schedule.schedule:Select to set up scheduled automatic collection re-building
schedule.colname:The collection name for which scheduling will be set up
schedule.import:The import command to be scheduled
schedule.build:The buildcol command to be scheduled
schedule.toaddr:The email address to send scheduled build notifications to
schedule.toaddr.default:Specify User's Email in File->Preferences
schedule.fromaddr:The sender email address
schedule.fromaddr.default:Specify maintainer in main.cfg
schedule.smtp:The mail server that sendmail must contact to send email
schedule.smtp.default:Specify MailServer in main.cfg
schedule.out:Filename or handle to print output status to.

# -- export.pl --
export.exportdir:Where the export material ends up.
export.cannot_open_stats_file:WARNING: Couldn't open stats file %s.
export.cannot_open_fail_log:ERROR: Couldn't open fail log %s
export.cannot_sort:WARNING: export.pl cannot sort documents when groupsize > 1. sortmeta option will be ignored.
export.collectdir:The path of the "collect" directory.
export.complete:Export complete
export.debug:Print exported text to STDOUT (for GreenstoneXML exporting)
export.desc:PERL script used to export files in a Greenstone collection to another format.
export.faillog:Fail log filename. This log receives the filenames of any files which fail to be processed. (Default: collectdir/collname/etc/fail.log)
export.groupsize:Number of documents to group into one XML file.
export.gzip:Use gzip to compress resulting xml documents (don't forget to include ZIPPlugin in your plugin list when building from compressed documents).
export.importdir:Where the original material lives.
export.keepold:Will not destroy the current contents of the export directory.
export.maxdocs:Maximum number of documents to export.
export.listall:List all the saveas formats
export.saveas:Format to export documents as.
export.saveas.DSpace:DSpace Archive format.
export.saveas.GreenstoneMETS:METS format using the Greenstone profile.
export.saveas.FedoraMETS:METS format using the Fedora profile.
export.saveas.GreenstoneXML:Greenstone XML Archive format
export.saveas.MARCXML:MARC XML format (an XML version of MARC 21)
export.out:Filename or handle to print output status to.
export.params:[options] collection-name
export.removeold:Will remove the old contents of the export directory.
export.removing_export:Removing current contents of the export directory...
export.sortmeta:Sort documents alphabetically by metadata for building. This will be disabled if groupsize > 1.
export.statsfile:Filename or handle to print export statistics to.
export.stats_backup:Will print stats to STDERR instead.
export.verbosity:Controls the quantity of output. 0=none, 3=lots.

# -- mkcol.pl --
mkcol.about:The about text for the collection.
mkcol.bad_name_cvs:ERROR: No collection can be named CVS as this may interfere with directories created by the CVS versioning system.
mkcol.bad_name_svn:ERROR: No collection can be named .svn as this may interfere with directories created by the SVN versioning system.
mkcol.bad_name_modelcol:ERROR: No collection can be named modelcol as this is the name of the model collection.
mkcol.cannot_find_modelcol:ERROR: Cannot find the model collection %s
mkcol.col_already_exists:ERROR: This collection already exists.
mkcol.collectdir:Directory where new collection will be created.
mkcol.group_not_valid_in_gs3:The group option is not valid in Greenstone 3 mode (-gs3mode).
mkcol.creating_col:Creating the collection %s
mkcol.creator:The collection creator's e-mail address.
mkcol.creator_undefined:ERROR: The creator was not defined. This variable is needed to recognise duplicate collection names.
mkcol.desc:PERL script used to create the directory structure for a new Greenstone collection.
mkcol.doing_replacements:doing replacements for %s
mkcol.group:Create a new collection group instead of a standard collection.
mkcol.gs3mode:Mode for Greenstone 3 collections.
mkcol.long_colname:ERROR: The collection name must be less than 8 characters so compatibility with earlier filesystems can be maintained.
mkcol.maintainer:The collection maintainer's email address (if different from the creator).
mkcol.no_collectdir:ERROR: The collect dir doesn't exist: %s
mkcol.no_collectdir_specified:ERROR: No collect dir was specified. In gs3mode, either the -site or -collectdir option must be specified.
mkcol.no_colname:ERROR: No collection name was specified.
mkcol.optionfile:Get options from file, useful on systems where long command lines may cause problems.
mkcol.params:[options] collection-name
mkcol.plugin:Perl plugin module to use (there may be multiple plugin entries).
mkcol.public:If this collection has anonymous access.
mkcol.public.true:Collection is public
mkcol.public.false:Collection is private
mkcol.quiet:Operate quietly.
mkcol.site:In gs3mode, uses this site name with the GSDL3HOME environment variable to determine collectdir, unless -collectdir is specified.
mkcol.success:The new collection was created successfully at %s
mkcol.title:The title of the collection.
mkcol.win31compat:Whether the named collection directory must conform to Windows 3.1 file conventions (i.e. 8 characters long).
mkcol.win31compat.true:Directory name 8 characters or less
mkcol.win31compat.false:Directory name any length

# -- pluginfo.pl --
pluginfo.collection:Giving a collection name will make pluginfo.pl look in collect/collection-name/perllib/plugins first. If the plugin is not found there it will look in the general perllib/plugins directory.
pluginfo.desc:Prints information about a plugin.
pluginfo.general_options:General options are inherited from parent classes of the plugin.
pluginfo.info:info
pluginfo.no_plugin_name:ERROR: You must provide a plugin name.
pluginfo.option_types:Plugins may take two types of options
pluginfo.params:[options] plugin-name
pluginfo.passing_options:Options may be passed to any plugin by including them in your collect.cfg configuration file.
pluginfo.specific_options:Specific options are defined within the plugin itself, and are available only to this particular plugin.

# -- plugoutinfo.pl --
plugoutinfo.collection:Giving a collection name will make plugoutinfo.pl look in collect/collection-name/perllib/plugouts first. If the plugout is not found there it will look in the general perllib/plugouts directory.
plugoutinfo.desc:Prints information about a plugout.
plugoutinfo.general_options:General options are inherited from parent classes of the plugout.
plugoutinfo.info:info
plugoutinfo.no_plugout_name:ERROR: You must provide a plugout name.
plugoutinfo.option_types:Plugouts may take two types of options
plugoutinfo.params:[options] plugout-name
plugoutinfo.passing_options:Options may be passed to any plugout by including them in your collect.cfg configuration file.
plugoutinfo.specific_options:Specific options are defined within the plugout itself, and are available only to this particular plugout.

#
# Classifier option descriptions
#
AllList.desc:Creates a single list of all documents. Used by the OAI server.
AZCompactList.allvalues:Use all metadata values found.
AZCompactList.desc:Classifier plugin for sorting alphabetically (on a-zA-Z0-9). Produces a horizontal A-Z list, then a vertical list containing documents, or bookshelves for documents with common metadata.
AZCompactList.doclevel:Level to process document at.
AZCompactList.doclevel.top:Whole document.
AZCompactList.doclevel.section:By sections.
AZCompactList.firstvalueonly:Use only the first metadata value found.
AZCompactList.freqsort:Sort by node frequency rather than alpha-numeric.
AZCompactList.maxcompact:Maximum number of documents to be displayed per page.
AZCompactList.metadata:A single Metadata field, or a comma separated list of Metadata fields, used for classification. If a list is specified, the first metadata type that has values will be used. May be used in conjunction with the -firstvalueonly and -allvalues flags, to select only the first value, or all metadata values from the list.
AZCompactList.mincompact:Minimum number of documents to be displayed per page.
AZCompactList.mingroup:The smallest value that will cause a group in the hierarchy to form.
AZCompactList.minnesting:The smallest value that will cause a list to be converted into a nested list.
AZCompactList.recopt:Used in nested metadata such as -metadata Year/Organisation.
AZCompactList.sort:Metadata field to sort the leaf nodes by.
AZCompactSectionList.desc:Variation on AZCompactList that classifies sections rather than documents. Entries are sorted by section-level metadata.
AZList.desc:Classifier plugin for sorting alphabetically (on a-zA-Z0-9). Produces a horizontal A-Z list, with documents listed underneath.
AZList.metadata:A single Metadata field or a comma separated list of Metadata fields used for classification. Following the order indicated by the list, the first field that contains a Metadata value will be used. The list will be sorted by this element.
AZSectionList.desc:Variation on AZList that classifies sections rather than documents. Entries are sorted by section-level metadata.
BasClas.bad_general_option:The %s classifier uses an incorrect option. Check your collect.cfg configuration file.
BasClas.builddir:Where to put the built indexes.
BasClas.buttonname:The label for the classifier screen and button in the navigation bar. The default is the metadata element specified with -metadata.
BasClas.desc:Base class for all the classifiers.
BasClas.no_metadata_formatting:Don't do any automatic metadata formatting (for sorting).
BasClas.outhandle:The file handle to write output to.
BasClas.removeprefix:A prefix to ignore in metadata values when sorting.
BasClas.removesuffix:A suffix to ignore in metadata values when sorting.
BasClas.verbosity:Controls the quantity of output. 0=none, 3=lots.
Browse.desc:A fake classifier that provides a link in the navigation bar to a prototype combined browsing and searching page. Only works for mgpp collections, and is only practical for small collections.
DateList.bymonth:Classify by year and month instead of only year.
DateList.desc:Classifier plugin for sorting by date. By default, sorts by 'Date' metadata. Dates are assumed to be in the form yyyymmdd or yyyy-mm-dd.
DateList.metadata:The metadata that contains the dates to classify by. The format is expected to be yyyymmdd or yyyy-mm-dd. Can be a comma separated list, in which case the first date found will be used.
DateList.reverse_sort:Sort the documents in reverse chronological order (newest first).
DateList.nogroup:Make each year an individual entry in the horizontal list, instead of spanning years with few entries. (This can also be used with the -bymonth option to make each month a separate entry instead of merging).
DateList.no_special_formatting:Don't display Year and Month information in the document list.
DateList.sort:An extra metadata field to sort by in the case where two documents have the same date.
GenericList.classify_sections:Classify sections instead of documents.
GenericList.desc:A general and flexible list classifier with most of the abilities of AZCompactList, but with better Unicode, metadata and sorting capabilities.
GenericList.metadata:Metadata fields used for classification. Use '/' to separate the levels in the hierarchy and ';' to separate metadata fields within each level.
GenericList.partition_name_length:The length of the partition name; defaults to a variable length from 1 up to 3 characters, depending on how many are required to distinguish the partition start from its end. This option only applies when partition_type_within_level is set to 'constant_size'.
GenericList.partition_size_within_level:The number of items in each partition (only applies when partition_type_within_level is set to 'constant_size').
GenericList.partition_type_within_level:The type of partitioning done: either 'per_letter', 'constant_size', or 'none'.
GenericList.sort_leaf_nodes_using:Metadata fields used for sorting the leaf nodes. Use '|' to separate the metadata groups to stable sort and ';' to separate metadata fields within each group.
GenericList.sort_using_unicode_collation:Sort using the Unicode Collation Algorithm. Requires http://www.unicode.org/Public/UCA/latest/allkeys.txt file to be downloaded into perl's lib/Unicode/Collate folder.
GenericList.use_hlist_for:Metadata fields to use a hlist rather than a vlist. Use ',' to separate the metadata groups and ';' to separate the metadata fields within each group.
HFileHierarchy.desc:Classifier plugin for generating hierarchical classifications based on a supplementary structure file.
Hierarchy.desc:Classifier plugin for generating a hierarchical classification. This may be based on structured metadata, or may use a supplementary structure file (use the -hfile option).
Hierarchy.documents_last:Display document nodes after classifier nodes.
Hierarchy.hfile:Use the specified classification structure file.
Hierarchy.hlist_at_top:Display the first level of the classification horizontally.
Hierarchy.reverse_sort:Sort leaf nodes in reverse order (use with -sort).
Hierarchy.separator:Regular expression used for the separator, if using structured metadata.
Hierarchy.sort:Metadata field to sort leaf nodes by. Leaves will not be sorted if not specified.
Hierarchy.suppressfirstlevel:Ignore the first part of the metadata value. This is useful for metadata where the first element is common, such as the import directory in gsdlsourcefilename.
Hierarchy.suppresslastlevel:Ignore the final part of the metadata value. This is useful for metadata where each value is unique, such as file paths.
HTML.desc:Creates an empty classification that's simply a link to a web page.
HTML.url:The url of the web page to link to.
List.bookshelf_type:Controls when to create bookshelves
List.bookshelf_type.always:Create a bookshelf icon even if there is only one item in each group at the leaf nodes.
List.bookshelf_type.never:Never create a bookshelf icon, even if there is more than one item in each group at the leaf nodes.
List.bookshelf_type.duplicate_only:Create a bookshelf icon only when there is more than one item in each group at the leaf nodes.
List.desc:Simple list classifier plugin.
List.level_partition.per_letter:Create a partition for each letter.
List.level_partition.constant_size:Create partitions of constant size.
List.level_partition.per_letter_fixed_size:Create a partition per letter with approximately fixed size.
List.metadata:A single Metadata field or a comma separated list of Metadata fields used for classification. Following the order indicated by the list, the first field that contains a Metadata value will be used. The list will be sorted by this element, unless -sort is used. If no metadata is specified, all documents will be included in the list; otherwise only documents that contain a metadata value will be included.
List.sort:Metadata field to sort by. Use '-sort nosort' for no sorting.
Phind.desc:Produces a hierarchy of phrases found in the text, which is browsable via an applet.
Phind.language:Language or languages to use when building the hierarchy. Languages are identified by two-letter language codes like en (English), es (Spanish), and fr (French). Language is a regular expression, so 'en|fr' (English or French) and '..' (match any language) are valid.
Phind.min_occurs:The minimum number of times a phrase must appear in the text to be included in the phrase hierarchy.
Phind.savephrases:If set, the phrase information will be stored in the given file as text. It is probably a good idea to use an absolute path.
Phind.suffixmode:The smode parameter to the phrase extraction program. A value of 0 means that stopwords are ignored, and a value of 1 means that stopwords are used.
Phind.text:The text used to build the phrase hierarchy.
Phind.thesaurus:Name of a thesaurus stored in Phind format in the collection's etc directory.
Phind.title:The metadata field used to describe each document.
Phind.untidy:Don't remove working files.
RecentDocumentsList.desc:Classifier that gives a list of newly added or modified documents.
RecentDocumentsList.include_docs_added_since:Include only documents modified or added after the specified date (in yyyymmdd or yyyy-mm-dd format).
RecentDocumentsList.include_most_recently_added:Include only the specified number of most recently added documents. Only used if include_docs_added_since is not specified.
RecentDocumentsList.sort:Metadata to sort the list by. If not specified, the list will be sorted by date of modification/addition.
SectionList.desc:Same as the List classifier, but includes all sections of a document (excluding the top level) rather than just the top-level document itself.
Collage.desc:An applet is used to display a collage of images found in the collection.
Collage.geometry:The dimensions of the collage canvas. For a canvas 600 pixels wide by 400 pixels high, for example, specify the geometry as 600x400.
Collage.maxDepth:Images for collaging are drawn from mirroring the underlying browse classifier. This controls the maximum depth of the mirroring process.
Collage.maxDisplay:The maximum number of images to show in the collage at any one time.
Collage.imageType:Controls, by file name extension, which file types are used in the collage. Separate the file name extensions in the list with the percent (%%) symbol.
Collage.bgcolor:The background color of the collage canvas, specified in hexadecimal form (for example, #008000 results in a forest green background).
Collage.buttonname:The label for the classifier screen and button in the navigation bar.
Collage.refreshDelay:Rate, in milliseconds, at which the collage canvas is refreshed.
Collage.isJava2:Controls which run-time classes of Java are used. More advanced versions of Java (i.e. Java 1.2 onwards) include more sophisticated support for controlling transparency in images; this flag helps control what happens. However, the built-in Java runtime of some browsers is version 1.1. By default, the applet auto-detects which version of Java the browser is running and acts accordingly.
Collage.imageMustNotHave:Used to suppress images that should not appear in the collage, such as the image buttons that make up the navigation bar.
Collage.caption:Optional captions to display below the collage canvas.
# # Plugin option descriptions # AcronymExtractor.adding:adding AcronymExtractor.already_seen:already seen AcronymExtractor.desc:Helper extractor plugin for locating and marking up acronyms in text. AcronymExtractor.done_acronym_extract:done extracting acronyms. AcronymExtractor.done_acronym_markup:done acronym markup. AcronymExtractor.extract_acronyms:Extract acronyms from within text and set as metadata. AcronymExtractor.extracting_acronyms:extracting acronyms AcronymExtractor.marking_up_acronyms:marking up acronyms AcronymExtractor.markup_acronyms:Add acronym metadata into document text. ArchivesInfPlugin.desc:Plugin which reads through an archives.inf file (i.e. the file generated in the archives directory when an import is done), processing each file it finds. AutoExtractMetadata.desc:Base plugin that brings together all the extractor functionality from the Extractor plugins. AutoExtractMetadata.extracting:extracting AutoExtractMetadata.first:Comma separated list of sizes, used to extract the first NNN characters of text into a metadata field. The field is called 'FirstNNN'. BaseMediaConverter.desc:Helper plugin that provides base functionality for media converter plugins such as ImageConverter and video converters. BasePlugin.associate_ext:Causes files with the same root filename as the document being processed by the plugin AND a filename extension from the comma separated list provided by this argument to be associated with the document being processed rather than handled as separate documents. BasePlugin.associate_tail_re:A regular expression to match filenames against to find associated files. Used as a more powerful alternative to associate_ext. BasePlugin.block_exp:Files matching this regular expression will be blocked from being passed to any later plugins in the list. This has no real effect other than to prevent lots of warning messages about input files you don't care about. Each plugin might have a default block_exp, e.g.
by default HTMLPlugin blocks any files with .gif, .jpg, .jpeg, .png or .css file extensions. BasePlugin.desc:Base class for all the import plugins. BasePlugin.dummy_text:This document has no text. BasePlugin.encoding.ascii:Plain 7-bit ascii. This may be a bit faster than using iso_8859_1. Beware of using this when the text may contain characters outside the plain 7-bit ascii set (e.g. German or French text containing accents); use iso_8859_1 instead. BasePlugin.encoding.unicode:Just unicode. BasePlugin.encoding.utf8:Either utf8 or unicode -- automatically detected. BasePlugin.filename_encoding:The encoding of the source file filenames. BasePlugin.filename_encoding.auto:Automatically detect the encoding of the filename. BasePlugin.filename_encoding.auto_language_analysis:Auto-detect the encoding of the filename by analysing it. BasePlugin.filename_encoding.auto_filesystem_encoding:Auto-detect the encoding of the filename using the filesystem encoding. BasePlugin.filename_encoding.auto_fl:Uses filesystem encoding then language analysis to detect the filename encoding. BasePlugin.filename_encoding.auto_lf:Uses language analysis then filesystem encoding to detect the filename encoding. BasePlugin.no_blocking:Don't do any file blocking. Any associated files (e.g. images in a web page) will be added to the collection as documents in their own right. BasePlugin.no_cover_image:Do not look for a prefix.jpg file (where prefix is the same prefix as the file being processed) and associate it as a cover image. BasePlugin.OIDtype.auto:Use the OIDtype set in import.pl. BasePlugin.process_exp:A perl regular expression to match against filenames. Matching filenames will be processed by this plugin. For example, using '(?i)\.html?\$' matches all documents ending in .htm or .html (case-insensitive). BasePlugin.smart_block:Block files in a smarter way than just looking at filenames.
BasePlugin.stems:stems BasePlugin.file_rename_method:The method to be used in renaming the copy of the imported file and associated files. BasePlugin.rename_method.url:Use url encoding in renaming imported files and associated files. BasePlugin.rename_method.base64:Use base64 encoding in renaming imported files and associated files. BasePlugin.rename_method.none:Don't rename imported files and associated files. BibTexPlugin.desc:BibTexPlugin reads bibliography files in BibTex format. BibTexPlugin creates a document object for every reference in the file. It is a subclass of SplitTextFile, so if there are multiple records, all are read. BookPlugin.desc:Creates a multi-level document from a document containing <<TOC>> level tags. Metadata for each section is taken from any other tags on the same line as the <<TOC>>, e.g. <<Title>>xxxx<</Title>> sets Title metadata. Everything else between TOC tags is treated as simple html (i.e. no processing of html links or any other HTMLPlugin type stuff is done). Expects input files to have a .hb file extension by default (this can be changed by adding a -process_exp option). A file with the same name as the hb file but a .jpg extension is taken as the cover image (jpg files are blocked by this plugin). BookPlugin is a simplification (and extension) of the HBPlugin used by the Humanity Library collections. BookPlugin is faster as it expects the input files to be cleaner (the input to the HDL collections contains lots of excess html tags around <<TOC>> tags, uses <<I>> tags to specify images, and simply takes all text between <<TOC>> tags and the start of text to be Title metadata). If you're marking up documents to be displayed in the same way as the HDL collections, use this plugin instead of HBPlugin. CONTENTdmPlugin.desc:Plugin that processes RDF files in exported CONTENTdm collections. ConvertBinaryFile.apply_fribidi:Run the "fribidi" Unicode Bidirectional Algorithm program over the converted file (for right-to-left text).
ConvertBinaryFile.convert_to:Plugin converts to TEXT or HTML or various types of Image (e.g. JPEG, GIF, PNG). ConvertBinaryFile.convert_to.auto:Automatically select the format to convert to. The format chosen depends on the input document type; for example, Word will automatically be converted to HTML, whereas PowerPoint will be converted to Greenstone's PagedImage format. ConvertBinaryFile.convert_to.html:HTML format. ConvertBinaryFile.convert_to.text:Plain text format. ConvertBinaryFile.convert_to.pagedimg:A series of images. ConvertBinaryFile.convert_to.pagedimg_jpg:A series of images in JPEG format. ConvertBinaryFile.convert_to.pagedimg_gif:A series of images in GIF format. ConvertBinaryFile.convert_to.pagedimg_png:A series of images in PNG format. ConvertBinaryFile.desc:This plugin is inherited by such plugins as WordPlugin, PowerPointPlugin, PostScriptPlugin, RTFPlugin and PDFPlugin. It facilitates the conversion of these document types to either HTML, TEXT or a series of images. It works by dynamically loading an appropriate secondary plugin (HTMLPlugin, StructuredHTMLPlugin, PagedImagePlugin or TextPlugin) based on the plugin argument 'convert_to'. ConvertBinaryFile.keep_original_filename:Keep the original filename for the associated file, rather than converting to doc.pdf, doc.doc etc. ConvertBinaryFile.use_strings:If set, a simple strings function will be called to extract text if the conversion utility fails. ConvertToRogPlugin.desc:A plugin that inherits from RogPlugin. CSVPlugin.desc:A plugin for files in comma-separated value format. A new document will be created for each line of the file. DateExtractor.desc:Helper extractor plugin for extracting historical date information from text. DateExtractor.extract_historical_years:Extract time-period information from historical documents. This is stored as metadata with the document.
There is a search interface for this metadata, which you can include in your collection by adding the statement, "format QueryInterface DateSearch" to your collection configuration file. DateExtractor.maximum_century:The maximum named century to be extracted as historical metadata (e.g. 14 will extract all references up to the 14th century). DateExtractor.maximum_year:The maximum historical date to be used as metadata (in a Common Era date, such as 1950). DateExtractor.no_bibliography:Do not try to block bibliographic dates when extracting historical dates. DirectoryPlugin.desc:A plugin which recurses through directories processing each file it finds. DirectoryPlugin.recheck_directories:After the files in an import directory have been processed, re-read the directory to discover any new files created. DirectoryPlugin.use_metadata_files:(DEPRECATED - Add MetadataXMLPlugin to the list of plugins instead) Read metadata from metadata XML files. DatabasePlugin.desc:A plugin that imports records from a database. This uses perl's DBI module, which includes back-ends for mysql, postgresql, comma separated values (CSV), MS Excel, ODBC, sybase, etc. Extra modules may need to be installed to use this. See /etc/packages/example.dbi for an example config file. DSpacePlugin.desc:A plugin that takes a collection of documents exported from DSpace and imports them into Greenstone. DSpacePlugin.first_inorder_ext:This is used to identify the primary stream of a DSpace collection document. With this option, the system will work through the specified extension types of document in sequence to look for the possible primary stream. DSpacePlugin.first_inorder_mime:This is used to identify the primary data stream of a DSpace collection document. With this option, the system will work through the specified mime types of document in sequence to look for the possible primary stream.
DSpacePlugin.only_first_doc:This is used to identify the primary data stream of a DSpace collection document. With this option, the system will treat the first document in the dublin_core metadata file as the possible primary stream. EmailAddressExtractor.desc:Helper extractor plugin for discovering email addresses in text. EmailAddressExtractor.done_email_extract:done extracting e-mail addresses. EmailAddressExtractor.extracting_emails:extracting e-mail addresses EmailAddressExtractor.extract_email:Extract email addresses as metadata. EmailPlugin.desc:A plugin that reads email files. These are named with a simple number (i.e. as they appear in maildir folders) or with the extension .mbx (for mbox mail file format).\nDocument text: The document text consists of all the text after the first blank line in the document.\nMetadata (not Dublin Core!):\n\t\$Headers All the header content (optional, not stored by default)\n\t\$Subject Subject: header\n\t\$To To: header\n\t\$From From: header\n\t\$FromName Name of sender (where available)\n\t\$FromAddr E-mail address of sender\n\t\$DateText Date: header\n\t\$Date Date: header in GSDL format (eg: 19990924) EmailPlugin.no_attachments:Do not save message attachments. EmailPlugin.headers:Store email headers as "Headers" metadata. EmailPlugin.OIDtype.message_id:Use the message identifier as the document OID. If no message identifier is found, a hash OID will be used instead. EmailPlugin.split_exp:A perl regular expression used to split files containing many messages into individual documents. ExcelPlugin.desc:A plugin for importing Microsoft Excel files (versions 95 and 97). FavouritesPlugin.desc:Plugin to process Internet Explorer Favourites files. FOXPlugin.desc:Plugin to process a Foxbase dbt file. This plugin provides the basic functionality to read in the dbt and dbf files and process each record. This general plugin should be overridden for a particular database to process the appropriate fields in the file.
GreenstoneXMLPlugin.desc:Processes Greenstone Archive XML documents. Note that this plugin does no syntax checking (though the XML::Parser module tests for well-formedness). It's assumed that the Greenstone Archive files conform to their DTD. GISExtractor.desc:Helper extractor plugin for extracting placenames from text. Requires the GIS extension to Greenstone. GISExtractor.extract_placenames:Extract placenames from within text and set as metadata. Requires the GIS extension to Greenstone. GISExtractor.gazetteer:Gazetteer to use to extract placenames from within text and set as metadata. Requires the GIS extension to Greenstone. GISExtractor.place_list:When extracting placenames, include a list of placenames at the start of the document. Requires the GIS extension to Greenstone. HBPlugin.desc:Plugin which processes an HTML book directory. This plugin is used by the Humanity Library collections and does not handle input encodings other than ascii or extended ascii. This code is kind of ugly and could no doubt be made to run faster; by leaving it in this state I hope to encourage people to make their collections use BookPlugin instead ;-)\n\nUse BookPlugin if creating a new collection and marking up files like the Humanity Library collections. BookPlugin accepts all input encodings but expects the marked up files to be cleaner than those used by the Humanity Library collections. HBPlugin.encoding.iso_8859_1:Latin1 (western languages) HTMLImagePlugin.aggressiveness:Range of related text extraction techniques to use. HTMLImagePlugin.aggressiveness.1:Filename, path, ALT text only. HTMLImagePlugin.aggressiveness.2:All of 1, plus caption where available. HTMLImagePlugin.aggressiveness.3:All of 2, plus near paragraphs where available. HTMLImagePlugin.aggressiveness.4:All of 3, plus previous headers (<h1>, <h2>,
...) where available. HTMLImagePlugin.aggressiveness.5:All of 4, plus textual references where available. HTMLImagePlugin.aggressiveness.6:All of 4, plus page metatags (title, keywords, etc). HTMLImagePlugin.aggressiveness.7:All of 6, 5 and 4 combined. HTMLImagePlugin.aggressiveness.8:All of 7, plus repeat caption, filename, etc (raises the ranking of more relevant results). HTMLImagePlugin.aggressiveness.9:All of 1, plus full text of source page. HTMLImagePlugin.caption_length:Maximum length of captions (in characters). HTMLImagePlugin.convert_params:Additional parameters for ImageMagick convert on thumbnail creation. For example, '-raise' will give a three-dimensional effect to thumbnail images. HTMLImagePlugin.desc:A plugin for extracting images and associated text from webpages. HTMLImagePlugin.document_text:Add image text as document:text (otherwise it is stored in the IndexedText metadata field). HTMLImagePlugin.index_pages:Index the pages along with the images. Otherwise reference the pages at the source URL. HTMLImagePlugin.max_near_text:Maximum characters near images to extract. HTMLImagePlugin.min_height:Pixels. Skip images shorter than this. HTMLImagePlugin.min_near_text:Minimum characters of near text or caption to extract. HTMLImagePlugin.min_size:Bytes. Skip images smaller than this. HTMLImagePlugin.min_width:Pixels. Skip images narrower than this. HTMLImagePlugin.neartext_length:Target length of near text (in characters). HTMLImagePlugin.no_cache_images:Don't cache images (point to URL of original). HTMLImagePlugin.smallpage_threshold:Images on pages smaller than this (bytes) will have the page (title, keywords, etc) metadata added. HTMLImagePlugin.textrefs_threshold:Threshold for textual references. Lower values mean the algorithm is less strict. HTMLImagePlugin.thumb_size:Maximum thumbnail size. Applies to both width and height. HTMLPlugin.assoc_files:Perl regular expression of file extensions to associate with html documents.
HTMLPlugin.desc:This plugin processes HTML files. HTMLPlugin.description_tags:Split document into sub-sections where <Section>
tags occur. '-keep_head' will have no effect when this option is set. HTMLPlugin.extract_style:Extract style and script information from the HTML tag and save as DocumentHeader metadata. This will be set in the document page as the _document:documentheader_ macro. HTMLPlugin.file_is_url:Set if input filenames make up the url of the original source documents, e.g. if a web mirroring tool was used to create the import directory structure. HTMLPlugin.hunt_creator_metadata:Find as much metadata as possible on authorship and place it in the 'Creator' field. HTMLPlugin.keep_head:Don't remove headers from html files. HTMLPlugin.metadata_fields:Comma separated list of metadata fields to attempt to extract. Use 'tag<tagname>' to have the contents of the first <tagname> pair put in a metadata element called 'tag'. Capitalise this as you want the metadata capitalised in Greenstone, since the tag extraction is case insensitive. HTMLPlugin.no_metadata:Don't attempt to extract any metadata from files. HTMLPlugin.no_strip_metadata_html:Comma separated list of metadata names, or 'all'. Used with -description_tags, it prevents stripping of HTML tags from the values for the specified metadata. HTMLPlugin.nolinks:Don't make any attempt to trap links (setting this flag may improve the speed of building/importing but any relative links within documents will be broken). HTMLPlugin.no_image_links:Don't make any attempt to trap image links that allow viewing of images. HTMLPlugin.rename_assoc_files:Renames files associated with documents (e.g. images). Also creates a much shallower directory structure (useful when creating collections to go on cd-rom). HTMLPlugin.sectionalise_using_h_tags:Automatically create a sectioned document using h1, h2, ... hX tags. HTMLPlugin.title_sub:Substitution expression to modify the string stored as Title. Used by, for example, PDFPlugin to remove "Page 1", etc from text used as the title.
HTMLPlugin.tidy_html:If set, converts an HTML document into well-formed XHTML so that users can view the document in the book format. HTMLPlugin.old_style_HDL:Marks whether the files in this collection use the old-style HDL document tags. BaseMediaConverter.enable_cache:Cache automatically generated files (such as thumbnails and screen-size images) so they don't need to be repeatedly generated. ImageConverter.converttotype:Convert main image to format 's'. ImageConverter.create_screenview:If set to true, create a screen sized image, and set Screen, ScreenType, screenicon, ScreenWidth, ScreenHeight metadata. ImageConverter.create_thumbnail:If set to true, create a thumbnail version of each image, and add Thumb, ThumbType, thumbicon, ThumbWidth, ThumbHeight metadata. ImageConverter.desc:Helper plugin for image conversion using ImageMagick. ImageConverter.imagemagicknotinstalled:ImageMagick not installed ImageConverter.minimumsize:Ignore images smaller than n bytes. ImageConverter.noconversionavailable:Image conversion not available ImageConverter.noscaleup:Don't scale up small images when making thumbnails. ImageConverter.screenviewsize:Make screenview images of size nxn. ImageConverter.screenviewtype:Make screenview images in format 's'. ImageConverter.thumbnailsize:Make thumbnails of size nxn. ImageConverter.thumbnailtype:Make thumbnails in format 's'. ImageConverter.win95notsupported:ImageMagick not supported on Win95/98 ImagePlugin.desc:This plugin processes images, adding basic metadata. IndexPlugin.desc:This recursive plugin processes an index.txt file. The index.txt file should contain the list of files to be included in the collection followed by any extra metadata to be associated with each file.\n\nThe index.txt file should be formatted as follows: The first line may be a key (beginning with key:) to name the metadata fields (e.g. key: Subject Organization Date).
The following lines will contain a filename followed by the value that each metadata entry is to be set to. (e.g. 'irma/iw097e 3.2 unesco 1993' will associate the metadata Subject=3.2, Organization=unesco, and Date=1993 with the file irma/iw097e if the above key line was used)\n\nNote that if any of the metadata fields use the Hierarchy classifier plugin then the value they're set to should correspond to the first field (the descriptor) in the appropriate classification file.\n\nMetadata values may be named separately using a tag (e.g. <Subject>3.2</Subject>) and this will override any name given to them by the key line. If there's no key line, any unnamed metadata value will be named 'Subject'. ISISPlugin.desc:This plugin processes CDS/ISIS databases. For each CDS/ISIS database processed, three files must exist in the collection's import folder: the Master file (.mst), the Field Definition Table (.fdt), and the Cross-Reference File (.xrf). ISISPlugin.subfield_separator:The string used to separate subfields in CDS/ISIS database records. ISISPlugin.entry_separator:The string used to separate multiple values for single metadata fields in CDS/ISIS database records. KeyphraseExtractor.desc:Helper extractor plugin for generating keyphrases from text. Uses the Kea keyphrase extraction system. KeyphraseExtractor.extract_keyphrases:Extract keyphrases automatically with Kea (default settings). KeyphraseExtractor.extract_keyphrases_kea4:Extract keyphrases automatically with Kea 4.0 (default settings). Kea 4.0 is a new version of Kea that has been developed for controlled indexing of documents in the domain of agriculture. KeyphraseExtractor.extract_keyphrase_options:Options for keyphrase extraction with Kea. For example: mALIWEB - use the ALIWEB extraction model; n5 - extract 5 keyphrases; eGBK - use GBK encoding. KeyphraseExtractor.keyphrases:keyphrases KeyphraseExtractor.missing_kea:Error: The Kea software could not be found at %s.
Please download Kea %s from http://www.nzdl.org/Kea and install it in this directory. LaTeXPlugin.desc:Plugin for LaTeX documents. LOMPlugin.desc:Plugin for importing LOM (Learning Object Metadata) files. LOMPlugin.root_tag:The DocType of the XML file (or a regular expression that matches the root element). LOMPlugin.check_timestamp:Check timestamps of previously downloaded files, and only download again if the source file is newer. LOMPlugin.download_srcdocs:Download the source document if one is specified (in general^identifier^entry or technical^location). This option should specify a regular expression to match filenames against before downloading. Note, this currently doesn't work for documents outside a firewall. MARCPlugin.desc:Basic MARC plugin. MARCPlugin.metadata_mapping:Name of the file that includes mapping details from MARC values to Greenstone metadata names. Defaults to 'marc2dc.txt' found in the site's etc directory. MARCXMLPlugin.desc:MARCXML plugin. MARCXMLPlugin.metadata_mapping_file:Name of the file that includes mapping details from MARC values to Greenstone metadata names. Defaults to 'marc2dc.txt' found in the site's etc directory. MediaWikiPlugin.desc:Plugin for importing MediaWiki web pages. MediaWikiPlugin.show_toc:Add the 'table of contents' from the MediaWiki website's main page to the collection's About page. You need to specify a Perl regular expression in toc_exp below to match the 'table of contents' section. MediaWikiPlugin.delete_toc:Delete the 'table of contents' section on each HTML page. You need to specify a Perl regular expression in toc_exp below to match the 'table of contents' section. MediaWikiPlugin.toc_exp:A Perl regular expression to match the 'table of contents'. The default value matches common MediaWiki web pages. MediaWikiPlugin.delete_nav:Delete the navigation section. You need to specify a Perl regular expression in nav_div_exp below. MediaWikiPlugin.nav_div_exp:A Perl regular expression to match the navigation section.
The default value matches common MediaWiki web pages. MediaWikiPlugin.delete_searchbox:Delete the searchbox section. You need to specify a Perl regular expression in searchbox_div_exp below. MediaWikiPlugin.searchbox_div_exp:A Perl regular expression to match the searchbox section. The default value matches common MediaWiki web pages. MediaWikiPlugin.remove_title_suffix_exp:A Perl regular expression to trim the extracted title. For example, \\s-(.+) will trim title contents after "-". MetadataCSVPlugin.desc:A plugin for metadata in comma-separated value format. The Filename field in the CSV file is used to determine which document the metadata belongs to. MetadataPass.desc:On-the-side base class to BasePlugin that lets metadata plugins make use of the metadata_read pass of import.pl. MetadataXMLPlugin.desc:Plugin that processes metadata.xml files. MetadataEXIFPlugin.desc:Plugin that extracts EXIF metadata from images, audio and video. More specifically, it is based on the CPAN module 'ExifTool'. This actually supports many more formats than the name suggests (e.g. GPS, XMP, FlashPix, ID3, Vorbis). The plugin benefits from this, and has been designed to support all the formats ExifTool supports. See the ExifTool documentation for the file types and metadata schemes supported. GreenstoneMETSPlugin.desc:Processes Greenstone-style METS documents. MP3Plugin.desc:Plugin for processing MP3 files. MP3Plugin.assoc_images:Use Google image search to locate images related to the MP3 file based on ID3 Title and Artist metadata. MP3Plugin.applet_metadata:Used to store [applet] metadata for each document that contains the necessary HTML for an MP3 audio player applet to play that file. MP3Plugin.metadata_fields:Comma separated list of metadata fields to extract (assuming present) in an MP3 file. Use \"*\" to extract all the fields. NulPlugin.desc:Dummy (.nul) file plugin. Used with the files produced by exploding metadata database files.
NulPlugin.assoc_field:Name of a metadata field that will be set for each nul file. NulPlugin.add_metadata_as_text:Add a table of metadata as the text of the document, rather than "This document has no text". NulPlugin.remove_namespace_for_text:Remove namespaces from metadata names in the document text (if add_metadata_as_text is set). OAIPlugin.desc:Basic Open Archives Initiative (OAI) plugin. OAIPlugin.document_field:The metadata element specifying the file name of documents to attach the metadata to. OAIPlugin.metadata_set:Metadata set (namespace prefix) to import all metadata as. OAIPlugin.metadata_set.auto:Use the prefixes specified in the OAI record. OAIPlugin.metadata_set.dc:Use the dc prefix. Will map qualified dc elements into their Greenstone form, eg spatial becomes dc.Coverage^spatial. OggVorbisPlugin.add_technical_metadata:Add technical (eg. bitrate) metadata. OggVorbisPlugin.desc:A plugin for importing Ogg Vorbis audio files. OpenDocumentPlugin.desc:Plugin for OASIS OpenDocument format documents (used by OpenOffice 2.0). PagedImagePlugin.desc:Plugin for documents made up of a sequence of images, with optional OCR text for each image. This plugin processes .item files which list the sequence of image and text files, and provide metadata. PagedImagePlugin.documenttype:Set the document type (used for display). PagedImagePlugin.documenttype.paged:Paged documents have next and previous arrows and a 'go to page X' box. PagedImagePlugin.documenttype.hierarchy:Hierarchical documents have a table of contents. PagedImagePlugin.headerpage:Add a top level header page (that contains no image) to each document. PDFPlugin.allowimagesonly:Allow PDF files with no extractable text. Avoids the need to have -complex set. Only useful with convert_to html. PDFPlugin.complex:Create more complex output. With this option set, the output html will look much more like the original PDF file.
For this to function properly you need Ghostscript installed (for *nix, gs should be on your path, while for Windows you must have gswin32c.exe on your path). PDFPlugin.desc:Plugin that processes PDF documents. PDFPlugin.nohidden:Prevent pdftohtml from attempting to extract hidden text. This is only useful if the -complex option is also set. PDFPlugin.noimages:Don't attempt to extract images from the PDF. PDFPlugin.use_sections:Create a separate section for each page of the PDF file. PDFPlugin.zoom:The factor by which to zoom the PDF for output (this is only useful if -complex is set). PostScriptPlugin.desc:This is a \"poor man's\" ps to text converter. If you are serious, consider using the PRESCRIPT package, which is available for download at http://www.nzdl.org/html/software.html PostScriptPlugin.extract_date:Extract date from PS header. PostScriptPlugin.extract_pages:Extract pages from PS header. PostScriptPlugin.extract_title:Extract title from PS header. PowerPointPlugin.desc:A plugin for importing Microsoft PowerPoint files. PowerPointPlugin.windows_scripting:Use Microsoft Windows scripting technology (Visual Basic for Applications) to get PowerPoint to convert documents to various image types (e.g. JPEG, PNG, GIF) rather than rely on the open source package ppttohtml. PrintInfo.bad_general_option:The %s plugin uses an incorrect option. Check your collect.cfg configuration file. PrintInfo.desc:The most basic plugin; handles printing info (using pluginfo.pl) and parsing of the arguments. ProCitePlugin.desc:A plugin for (exported) ProCite databases. ProCitePlugin.entry_separator:The string used to separate multiple values for single metadata fields in ProCite database records.
ReadTextFile.could_not_extract_encoding:WARNING: encoding could not be extracted from %s - defaulting to %s ReadTextFile.could_not_extract_language:WARNING: language could not be extracted from %s - defaulting to %s ReadTextFile.could_not_open_for_reading:could not open %s for reading ReadTextFile.default_encoding:Use this encoding if -input_encoding is set to 'auto' and the text categorization algorithm fails to extract the encoding or extracts an encoding unsupported by Greenstone. This option can take the same values as -input_encoding. ReadTextFile.default_language:If Greenstone fails to work out what language a document is, the 'Language' metadata element will be set to this value. The default is 'en' (ISO 639 language symbols are used: en = English). Note that if -input_encoding is not set to 'auto' and -extract_language is not set, all documents will have their 'Language' metadata set to this value. ReadTextFile.desc:Base plugin for files that are plain text. ReadTextFile.empty_file:file contains no text ReadTextFile.extract_language:Identify the language of each document and set 'Language' metadata. Note that this will be done automatically if -input_encoding is 'auto'. ReadTextFile.file_has_no_text:ERROR: %s contains no text ReadTextFile.input_encoding:The encoding of the source documents. Documents will be converted from these encodings and stored internally as utf8. ReadTextFile.input_encoding.auto:Use a text categorization algorithm to automatically identify the encoding of each source document. This will be slower than explicitly setting the encoding, but will work where more than one encoding is used within the same collection. ReadTextFile.read_denied:Read permission denied for %s ReadTextFile.separate_cjk:Insert spaces between Chinese/Japanese/Korean characters to make each character a word. Use if the text is not segmented.
ReadTextFile.unsupported_encoding:WARNING: %s appears to be encoded in an unsupported encoding (%s) - using %s ReadTextFile.wrong_encoding:WARNING: %s was read using %s encoding but appears to be encoded as %s. ReadXMLFile.desc:Base class for XML plugins. ReadXMLFile.xslt:Transform a matching input document with the XSLT in the named file. A relative filename is assumed to be in the collection's file area, for instance etc/mods2dc.xsl. RealMediaPlugin.desc:A plugin for processing Real Media files. ReferPlugin.desc:ReferPlugin reads bibliography files in Refer format. RogPlugin.desc:Creates simple single-level documents from .rog or .mdb files. RTFPlugin.desc:Plugin for importing Rich Text Format files. SourceCodePlugin.desc:Filename is currently used for Title (optionally minus some prefix). Current languages:\ntext: READMEs/Makefiles\nC/C++ (currently extracts #include statements and C++ class decls)\nPerl (currently only done as text)\nShell (currently only done as text) SourceCodePlugin.remove_prefix:Remove this leading pattern from the filename (eg -remove_prefix /tmp/XX/src/). The default is to remove the whole path from the filename. SplitTextFile.desc:SplitTextFile is a plugin for splitting input files into segments that will then be individually processed. This plugin should not be called directly. Instead, if you need to process input files that contain several documents, you should write a plugin with a process function that will handle one of those documents and have it inherit from SplitTextFile. See ReferPlugin for an example. SplitTextFile.split_exp:A perl regular expression to split input files into segments. StructuredHTMLPlugin.desc:A plugin to process structured HTML documents, splitting them into sections based on style information. StructuredHTMLPlugin.delete_toc:Remove any table of contents, list of figures etc from the converted HTML file. Styles for these are specified by the toc_header option.
StructuredHTMLPlugin.title_header:Possible user-defined styles for the title header.
StructuredHTMLPlugin.level1_header:Possible user-defined styles for the level1 header in the HTML document (equivalent to <h1>).
StructuredHTMLPlugin.level2_header:Possible user-defined styles for the level2 header in the HTML document (equivalent to <h2>).
StructuredHTMLPlugin.level3_header:Possible user-defined styles for the level3 header in the HTML document (equivalent to <h3>).
StructuredHTMLPlugin.toc_header:Possible user-defined header styles for the table of contents, table of figures, etc., to be removed if delete_toc is set.
TextPlugin.desc:Creates a simple single-level document. Adds Title metadata from the first line of text (up to 100 characters long).
TextPlugin.title_sub:Substitution expression to modify the string stored as Title. Used by, for example, PostScriptPlugin to remove "Page 1" etc. from the text used as the title.
UnknownPlugin.assoc_field:Name of the metadata field that will hold the associated file's name.
UnknownPlugin.desc:This is a simple plugin for importing files in formats that Greenstone doesn't know anything about. A fictional document will be created for every such file, and the file itself will be passed to Greenstone as the \"associated file\" of the document.
UnknownPlugin.file_format:Type of the file (e.g. MPEG, MIDI, ...)
UnknownPlugin.mime_type:MIME type of the file (e.g. image/gif).
UnknownPlugin.process_extension:Process files with this file extension. This option is an alternative to process_exp that is simpler to use but less flexible.
UnknownPlugin.srcicon:Specify a macro name (without underscores) to use as srcicon metadata.
WordPlugin.desc:A plugin for importing Microsoft Word documents.
WordPlugin.windows_scripting:Use Microsoft Windows scripting technology (Visual Basic for Applications) to get Word to convert documents to HTML, rather than relying on the open source package WvWare. Causes the Word application to open on screen if not already running.
WordPlugin.metadata_fields:Retrieves metadata from the HTML document converted by VB scripting. Allows users to define a comma-separated list of metadata fields to attempt to extract. Use 'tag<tagname>' to have the contents of the first <tagname> pair put in a metadata element called 'tagname'. Capitalise this as you want the metadata capitalised in Greenstone, since the tag extraction is case insensitive.
ZIPPlugin.desc:Plugin which handles compressed and/or archived input formats. Currently handled formats and file extensions are:\ngzip (.gz, .z, .tgz, .taz)\nbzip (.bz)\nbzip2 (.bz2)\nzip (.zip .jar)\ntar (.tar)\n\nThis plugin relies on the following utilities being present (if trying to process the corresponding formats):\ngunzip (for gzip)\nbunzip (for bzip)\nbunzip2 (for bzip2)\nunzip (for zip)\ntar (for tar)
#
# Download module option descriptions
#
BaseDownload.desc:Base class for Download modules
BaseDownload.bad_general_option:The %s download module uses an incorrect option.
MediaWikiDownload.desc:A module for downloading from MediaWiki websites
MediaWikiDownload.reject_filetype:Comma-separated list of URL patterns to ignore, e.g. *cgi-bin*,*.ppt ignores hyperlinks that contain either 'cgi-bin' or '.ppt'
MediaWikiDownload.reject_filetype_disp:Ignore URL patterns
MediaWikiDownload.exclude_directories:Comma-separated list of directories to exclude (each must be an absolute path), e.g. /people,/documentation will exclude the 'people' and 'documentation' subdirectories of the site currently being crawled.
MediaWikiDownload.exclude_directories_disp:Exclude directories
OAIDownload.desc:A module for downloading from OAI repositories
OAIDownload.url_disp:Source URL
OAIDownload.url:OAI repository URL
OAIDownload.set_disp:Restrict to set
OAIDownload.set:Restrict the download to the specified set in the repository
OAIDownload.metadata_prefix_disp:Metadata prefix
OAIDownload.metadata_prefix:The metadata format used in the export, e.g. oai_dc, qdc, etc. Press the button to find out what formats are supported.
OAIDownload.get_doc_disp:Get document
OAIDownload.get_doc:Download the source document if one is specified in the record
OAIDownload.get_doc_exts_disp:Only include file types
OAIDownload.get_doc_exts:Permissible filename extensions of documents to get
OAIDownload.max_records_disp:Max records
OAIDownload.max_records:Maximum number of records to download
SRWDownload.desc:A module for downloading from SRW (Search/Retrieve Web Service) repositories
WebDownload.desc:A module for downloading from the Internet via HTTP or FTP
WebDownload.url:Source URL. In the case of HTTP redirects, this value may change
WebDownload.url_disp:Source URL
WebDownload.depth:How many hyperlinks deep to go when downloading
WebDownload.depth_disp:Download Depth
WebDownload.below:Only mirror files below this URL
WebDownload.below_disp:Only files below URL
WebDownload.within:Only mirror files within the same site
WebDownload.within_disp:Only files within site
WebDownload.html_only:Download only HTML files, and ignore associated files, e.g. images and stylesheets
WebDownload.html_only_disp:Only HTML files
WgetDownload.desc:Base class that handles calls to wget
WgetDownload.proxy_on:Proxy on
WgetDownload.proxy_host:Proxy host
WgetDownload.proxy_port:Proxy port
WgetDownload.user_name:User name
WgetDownload.user_password:User password
Z3950Download.desc:A module for downloading from Z3950 repositories
Z3950Download.host:Host URL
Z3950Download.host_disp:Host
Z3950Download.port:Port number of the repository
Z3950Download.port_disp:Port
Z3950Download.database:Database to search for records in
Z3950Download.database_disp:Database
Z3950Download.find:Retrieve records containing the specified search term
Z3950Download.find_disp:Find
Z3950Download.max_records:Maximum number of records to download
Z3950Download.max_records_disp:Max Records
#
# Plugout option descriptions
#
BasPlugout.bad_general_option:The %s plugout uses an incorrect option.
BasPlugout.debug:Set debugging mode.
BasPlugout.desc:Base class for all the export plugouts.
BasPlugout.group_size:Number of documents to group into one XML file.
BasPlugout.gzip_output:Use gzip to compress the resulting XML documents (don't forget to include ZIPPlugin in your plugin list when building from compressed documents).
BasPlugout.output_handle:The file descriptor used to send output information.
BasPlugout.output_info:The reference to an arcinfo object used to store information about the archives.
BasPlugout.verbosity:Controls the quantity of output. 0=none, 3=lots.
BasPlugout.xslt_file:Transform a document with the XSLT in the named file.
DSpacePlugout.desc:DSpace Archive format.
FedoraMETSPlugout.desc:METS format using the Fedora profile.
FedoraMETSPlugout.fedora_namespace:The prefix used in Fedora for process ids (PIDS), e.g. greenstone:HASH0122efe4a2c58d0
GreenstoneXMLPlugout.desc:Greenstone XML Archive format.
GreenstoneMETSPlugout.desc:METS format using the Greenstone profile.
MARCXMLPlugout.desc:MARC XML format.
MARCXMLPlugout.group:Output the MARC XML records into a single file.
MARCXMLPlugout.mapping_file:Use the named mapping file for the transformation.
METSPlugout.desc:Superclass plugout for the METS format. Provides common functionality and key abstract methods for profiles such as GreenstoneMETS and FedoraMETS.
METSPlugout.xslt_txt:Transform a METS document's doctxt.xml with the XSLT in the named file.
METSPlugout.xslt_mets:Transform a METS document's docmets.xml with the XSLT in the named file.
#
# Perl module strings
#
classify.could_not_find_classifier:ERROR: Could not find classifier \"%s\"
download.could_not_find_download:ERROR: Could not find download module \"%s\"
plugin.could_not_find_plugin:ERROR: Could not find plugin \"%s\"
plugin.including_archive:including the contents of 1 ZIP/TAR archive
plugin.including_archives:including the contents of %d ZIP/TAR archives
plugin.kill_file:Process killed by .kill file
plugin.n_considered:%d documents were considered for processing
plugin.n_included:%d were processed and included in the collection
plugin.n_rejected:%d were rejected
plugin.n_unrecognised:%d were unrecognised
plugin.no_plugin_could_process:WARNING: No plugin could process %s
plugin.no_plugin_could_recognise:WARNING: No plugin could recognise %s
plugin.no_plugin_could_process_this_file:no plugin could process this file
plugin.no_plugin_could_recognise_this_file:no plugin could recognise this file
plugin.one_considered:1 document was considered for processing
plugin.one_included:1 was processed and included in the collection
plugin.one_rejected:1 was rejected
plugin.one_unrecognised:1 was unrecognised
plugin.see_faillog:See %s for a list of unrecognised and/or rejected documents
PrintUsage.default:Default
PrintUsage.required:REQUIRED
plugout.could_not_find_plugout:ERROR: Could not find plugout \"%s\"