source: other-projects/nightly-tasks/diffcol/trunk/model-collect/Customization/archives/HASH8bbe.dir/doc.xml@ 37422

Last change on this file since 37422 was 37422, checked in by anupama, 14 months ago

AUTOCOMMIT by gen-model-colls.sh script. Message: Clean rebuild of model collections 1/2. Clearing out deprecated archives and index.

File size: 5.6 KB
<?xml version="1.0" encoding="utf-8" standalone="no"?>
<!DOCTYPE Archive SYSTEM "http://greenstone.org/dtd/Archive/1.0/Archive.dtd">
<Archive>
<Section>
 <Description>
 <Metadata name="gsdldoctype">indexed_doc</Metadata>
 <Metadata name="Language">en</Metadata>
 <Metadata name="Encoding">utf8</Metadata>
 <Metadata name="Title">Bronwyn; page: 1 of 1 1 Using language models for generic entity extraction</Metadata>
 <Metadata name="gsdlsourcefilename">import/langmodl.ps</Metadata>
 <Metadata name="gsdlsourcefilerenamemethod">url</Metadata>
 <Metadata name="gsdlconvertedfilename">tmp/1678155669/langmodl.text</Metadata>
 <Metadata name="OrigSource">langmodl.text</Metadata>
 <Metadata name="Source">langmodl.ps</Metadata>
 <Metadata name="SourceFile">langmodl.ps</Metadata>
 <Metadata name="Plugin">PostScriptPlugin</Metadata>
 <Metadata name="FileSize">16751</Metadata>
 <Metadata name="FilenameRoot">langmodl</Metadata>
 <Metadata name="FileFormat">PS</Metadata>
 <Metadata name="srcicon">_iconps_</Metadata>
 <Metadata name="srclink_file">doc.ps</Metadata>
 <Metadata name="srclinkFile">doc.ps</Metadata>
 <Metadata name="Identifier">HASH8bbe6da0374b413b1b355c</Metadata>
 <Metadata name="lastmodified">1678155657</Metadata>
 <Metadata name="lastmodifieddate">20230307</Metadata>
 <Metadata name="oailastmodified">1678155670</Metadata>
 <Metadata name="oailastmodifieddate">20230307</Metadata>
 <Metadata name="assocfilepath">HASH8bbe.dir</Metadata>
 <Metadata name="gsdlassocfile">doc.ps:application/postscript:</Metadata>
 </Description>
 <Content>&lt;pre&gt;
Using language models for generic entity extraction

Ian H. Witten, Zane Bray, Malika Mahoui, W.J. Teahan
Computer Science, University of Waikato, Hamilton, New Zealand
[email protected]

Abstract

This paper describes the use of statistical language modeling techniques, such as are commonly used for text compression, to extract meaningful, low-level, information about the location of semantic tokens, or “entities,” in text. We begin by marking up several different token types in training documents – for example, people's names, dates and time periods, phone numbers, and sums of money. We form a language model for each token type and examine how accurately it identifies new tokens. We then apply a search algorithm to insert token boundaries in a way that maximizes compression of the entire test document. The technique can be applied to hierarchically-defined tokens, leading to a kind of “soft parsing” that will, we believe, be able to identify structured items such as references and tables in html or plain text, based on nothing more than a few marked-up examples in training documents.

1. INTRODUCTION

Text mining is about looking for patterns in text, and may be defined as the process of analyzing text to extract information that is useful for particular purposes. Compared with the kind of data stored in databases, text is unstructured, amorphous, and difficult to deal with. Nevertheless, in modern Western culture, text is the most common vehicle for the formal exchange of information. The motivation for trying to extract information from it is compelling – even if success is only partial.

Text mining is possible because you do not have to understand text in order to extract useful information from it. Here are four examples. First, if only names could be identified, links could be inserted automatically to other places that mention the same name – links that are “dynamically evaluated” by calling upon a search engine to bind them at click time. Second, actions can be associated with different types of data, using either explicit programming or programming-by-demonstration techniques. A day/time specification appearing anywhere within one's email could be associated with diary actions such as updating a personal organizer or creating an automatic reminder, and each mention of a day/time in the text could raise a popup menu of calendar-based actions. Third, text could be mined for data in tabular format, allowing databases to be created from formatted tables such as stock-market information on Web pages. Fourth, an agent could monitor incoming newswire stories for company names and collect documents that mention them – an automated press clipping service.

In all these examples, the key problem is to recognize different types of target fragments, which we will call tokens or “entities”. This is really a kind of language recognition problem: we have a text made up of different sublanguages (for personal names, company names, dates, table entries, and so on) and seek to determine which parts are expressed in which language.

The information extraction research community (of which we were, until recently, unaware) has studied these tasks and reported results at annual Message Understanding Conferences (MUC). For example, “named entities” are defined as proper names and quantities of interest, including personal, organization, and location names, as well as dates, times, percentages, and monetary amounts (Chinchor, 1999).

The standard approach to this problem is manual: tokenizers and grammars are hand-designed for the particular data being extracted. Looking at current commercial state-of-the-art text mining software, for example, IBM's Intelligent Miner for Text (Tkach, 1997) uses specific recognition modules carefully programmed for the different data types, while Apple's data detectors (Nardi et al., 1998) uses language grammars. The Text Tokenization Tool of Grover et al. (1999) is another example, and a demonstration version is available on the Web. The challenge for machine learning is to use
&lt;/pre&gt;</Content>
</Section>
</Archive>
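
The abstract above outlines the core mechanism: a separate character-level language model is trained for each marked-up token type, and an unseen fragment is attributed to whichever model encodes it in the fewest bits. The following is a minimal Python sketch of that idea, assuming simple bigram models with add-one smoothing in place of the PPM-style compression models the authors use, and hypothetical toy training examples; the search that inserts token boundaries to maximize compression of a whole document is not shown.

import math
from collections import defaultdict

# Smoothing assumes a fixed byte-sized alphabet so that models trained on
# different token types yield directly comparable code lengths.
VOCAB = 256


class CharBigramModel:
    """Character bigram model with add-one smoothing, an assumed stand-in
    for the per-token-type compression models described in the abstract."""

    START, END = "\x02", "\x03"   # sentinel characters marking token edges

    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))
        self.totals = defaultdict(int)

    def train(self, examples):
        for text in examples:
            padded = self.START + text + self.END
            for prev, cur in zip(padded, padded[1:]):
                self.counts[prev][cur] += 1
                self.totals[prev] += 1

    def bits(self, text):
        """Bits needed to encode text under this model; fewer bits = better fit."""
        padded = self.START + text + self.END
        total = 0.0
        for prev, cur in zip(padded, padded[1:]):
            p = (self.counts[prev][cur] + 1) / (self.totals[prev] + VOCAB)
            total -= math.log2(p)
        return total


def classify(candidate, models):
    """Assign a candidate string to the token type whose model compresses it best."""
    return min(models, key=lambda name: models[name].bits(candidate))


# Hypothetical toy training data: a few marked-up examples per token type.
models = {"date": CharBigramModel(), "money": CharBigramModel()}
models["date"].train(["12 March 1999", "1 April 2000", "30 June 1998"])
models["money"].train(["$1,200", "$35.50", "$9,999"])

print(classify("$4,750", models))      # money
print(classify("5 May 2001", models))  # date

In the full scheme the abstract describes, such per-type models feed a search over candidate boundary placements, choosing the segmentation that minimizes the compressed size of the entire test document.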