Translation Selection through Source Word Sense Disambiguation and Target Word Selection


Hyun Ah Lee†‡ and Gil Chang Kim†

† Dept. of EECS, Korea Advanced Institute of Science and Technology (KAIST), 373-1 Kusong-Dong, Yusong-Gu, Taejon 305-701, Republic of Korea
‡ Daumsoft Inc., Daechi-Dong 946-12, Kangnam-Gu, Seoul 135-280, Republic of Korea
halee@csone.kaist.ac.kr, gckim@cs.kaist.ac.kr

Abstract

A word has many senses, and each sense can be mapped into many target words. Therefore, to select the appropriate translation with a correct sense, the sense of a source word should be disambiguated before selecting a target word. Based on this observation, we propose a hybrid method for translation selection that combines disambiguation of a source word sense and selection of a target word. Knowledge for translation selection is extracted from a bilingual dictionary and target language corpora. Dividing translation selection into the two sub-problems, we can make knowledge acquisition straightforward and select more appropriate target words.

1 Introduction

In machine translation, translation selection is a process that selects an appropriate target language word corresponding to a word in a source language. Like other problems in natural language processing, knowledge acquisition is crucial for translation selection. Therefore, many researchers have endeavored to extract knowledge from existing resources.

As masses of language resources become available, statistical methods have been attempted for translation selection and have shown practical results. Some of the approaches have used a bilingual corpus as a knowledge source based on the idea of Brown et al. (1990), but they are not preferred in general since a bilingual corpus is hard to come by. Though some recent approaches have exploited word co-occurrence extracted from a monolingual corpus in a target language (Dagan and Itai, 1994; Prescher et al., 2000; Koehn and Knight, 2001), those methods often fail to select appropriate words because they do not consider the sense ambiguity of a target word.

In a bilingual dictionary, the senses of a word are classified into several sense divisions, and its translations are grouped by sense division. Therefore, when one looks up the translation of a word in a dictionary, one ought to resolve the sense of the word in a source language sentence, and then choose a target word among the translations corresponding to that sense. In this paper, the fact that a word has many senses and each sense can be mapped into many target words (Lee et al., 1999) is referred to as the `word-to-sense and sense-to-word' relationship, based on which we propose a hybrid method for translation selection.

In our method, translation selection is taken as the combined problem of sense disambiguation of a source language word and selection of a target language word. To disambiguate the sense of a source word, we employ both a dictionary based method and a target word co-occurrence based method. In order to select a target word, the co-occurrence based method is also used.

We introduce three measures for translation selection: sense preference, sense probability and word probability. In a bilingual dictionary, example sentences are listed for each sense division of a source word. The similarity between those examples and an input sentence can serve as a measure for sense disambiguation, which we define as sense preference. In the bilingual dictionary, target words are also recorded for each sense division. Since the set of those words can model each sense, we can calculate the probability of the sense by applying the co-occurrence based method to the set of words. We define the estimated probability as sense probability, which is taken as the other measure of sense disambiguation. In the same way, a probability for each of the translations can be calculated; we define it as word probability, which is a measure for word selection. Merging sense preference, sense probability and word probability, we compute a preference for each translation, and then choose the target word with the highest translation preference as a translation.

2 Translation Selection based on `word-to-sense and sense-to-word'

The `word-to-sense and sense-to-word' relationship means that a word in a source language could have multiple senses, each of which might be mapped into various words in a target language. As shown in the examples below, a Korean verb `meok-da' has many senses (work, deaf, eat, etc.). Also, `kkae-da' has three senses (break, hatch, wake), among which the break sense of `kkae-da' is mapped into multiple words such as `break', `smash' and `crack'. In that case, if

  kkae-da: break {break, smash, crack} / hatch {hatch} / wake {wake up, awake}
    (jeobsi-reul kkae-da): (X) hatch a dish, (O) break a dish, (O) smash a dish, (O) crack a dish
  meok-da: work / deaf / eat / ..., mapped into target words such as saw, bite, cut, deaf, eat, take, dye, have, ...
    (jeomsim-eul meok-da): (X) saw a lunch, (?) eat a lunch, (O) have a lunch, (O) take a lunch

`hatch' is selected as a translation of `kkae-da' in the sentence `jeobsi-reul kkae-da', the translation must be wrong because the sense of `kkae-da' in the sentence is break rather than hatch. In contrast, any word of the break sense of `kkae-da' forms a well-translated sentence. However, selecting a correct sense does not always guarantee a correct translation. If the sense of `meok-da' in `jeomsim-eul meok-da' is correctly disambiguated as eat but an inappropriate target word is selected, an improper or unnatural translation will be produced like `eat a lunch'.

Therefore, in order to get a correct translation, a target word must be selected that has the right sense of the source word and forms a proper target language sentence. Previous approaches to translation selection usually try to translate a source word directly into a target word without considering the sense. Thus, they

Figure 1: Our process of translation selection (sense disambiguation combines sense preference, computed from the sense definitions and example sentences of a bilingual dictionary with WordNet, and sense probability, computed from target word co-occurrence in a target language corpus; word selection uses word probability from the same co-occurrence data; spf, sp and wp are combined into a word-level translation)

suffer from the problem of knowledge acquisition and incorrect selection of translations.

In this paper, we propose a method for translation selection that reflects the `word-to-sense and sense-to-word' relationship. We divide the problem of translation selection into sense disambiguation of a source word and selection of a target word. Figure 1 shows our process of selecting a translation. We introduce three measures for translation selection: sense preference (spf) and sense probability (sp) for sense disambiguation, and word probability (wp) for word selection.

Figure 2 shows a part of an English-English-Korean Dictionary (EssEEK, 1995).1 As shown in the figure, example sentences and definition sentences are listed for each sense of a source word. In the figure, the example sentences of the destroy sense of `break' consist of words such as `cup', `piece', and `thread', and those for the violate sense consist of words such as `promise' and `contract', which can function as indicators for the sense of `break'. We calculate the sense preference of a word for each sense by estimating similarity between words in an input sentence and words in example and definition sentences.

Each sense of a source word can be mapped into a set of translations. As shown in the example, the break sense of `kkae-da' is mapped into the set {`break', `smash', `crack'}, and some words in the set can replace each other in a translated sentence like `break/smash/crack a

1 The English-English-Korean Dictionary used here is ...

break [breik] v. (broke, broken) vt.
  1. cause (something) to come to pieces by a blow or strain; destroy; crack; smash: ~ a cup / ~ a glass to [into] pieces / ~ a thread / ~ one's arm / ~ a stick in two / I heard the rope ~. / The river broke its bank.
  2. hurt; injure: ~ the skin.
  3. put (something) out of order; make useless by rough handling, etc.: ~ a watch / ~ a line / ~ the peace.
  4. fail to keep or follow; act against (a rule, law, etc.); violate; disobey: ~ one's promise / ~ a contract.
  5. ...

Figure 2: A part of an EEK Dictionary

dish'. Therefore, we can expect the behavior of those words to be alike or meaningfully related in a corpus. Based on this expectation, we compute sense probability by applying the target word co-occurrence based method to all words in the same sense division.

Word probability represents how likely a target word is to be selected from the set of words in the same sense division. In the example `jeomsim-eul meok-da', the sense of `meok-da' is eat, which is mapped into three words {`eat', `have', `take'}. Among them, `have' forms a natural phrase while `eat' makes an awkward phrase. We can judge the naturalness from the fact that `have' and `lunch' co-occur in a corpus more frequently than `eat' and `lunch'. In other words, the level of naturalness or awkwardness of a phrase can be captured by word co-occurrence. Based on this observation, we use target word co-occurrence to get word probability.

3 Calculation of each measure

3.1 Sense Preference

Given a source word s and its k-th sense s^k, we calculate sense preference spf(s^k) using equation (1). In the equation, SNT is the set of all content words except s in an input sentence, DEF_{s^k} is the set of all content words in the definition sentences of s^k, and EX_{s^k} is the set of all content words in the example sentences of s^k, both of which are extracted from the dictionary.

  spf(s^k) = Σ_{w_i ∈ SNT} max_{w_d ∈ DEF_{s^k}} sim(w_i, w_d)
           + Σ_{w_i ∈ SNT} max_{w_e ∈ EX_{s^k}} sim(w_i, w_e)        (1)

Sense preference is obtained by calculating similarity between words in SNT and words in DEF_{s^k} and EX_{s^k}. For each word w_i ∈ SNT, we sum up the maximum similarity between w_i and all clue words (i.e. w_d and w_e). To get the similarity between words (sim(w_i, w_j)), we use WordNet and the metric proposed in Rigau et al. (1997).

Senses in a dictionary are generally ordered by frequency. To reflect the distribution of senses in common text, we use a weight factor α(s^k) that is inversely proportional to the order of a sense s^k in the dictionary. Then, we calculate normalized sense preference to combine sense preference and sense probability.

  spf_w(s^k) = α(s^k) · spf(s^k) / Σ_i spf(s^i)                      (2)
  spf_norm(s^k) = spf_w(s^k) / Σ_i spf_w(s^i)                        (3)
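As an illustration, equations (1)-(3) can be sketched as below. The `sim` function is a toy stand-in for the WordNet-based metric of Rigau et al. (1997), and the two-sense entry and weights are hypothetical:

```python
# Sketch of sense preference (equations (1)-(3)); `sim` is a toy
# placeholder for the WordNet-based similarity of Rigau et al. (1997).

def sim(w1, w2):
    # Toy similarity: 1.0 for identical words, else 0.0.
    return 1.0 if w1 == w2 else 0.0

def spf(snt, defs, exs):
    # Equation (1): for each content word in the input sentence (SNT),
    # add its best similarity to the definition words (DEF) and to the
    # example-sentence words (EX) of the sense.
    return (sum(max(sim(wi, wd) for wd in defs) for wi in snt) +
            sum(max(sim(wi, we) for we in exs) for wi in snt))

def spf_norm(snt, senses, alpha):
    # Equations (2)-(3): weight by sense order, then normalize to sum to 1.
    raw = {k: spf(snt, d, e) for k, (d, e) in senses.items()}
    total = sum(raw.values()) or 1.0
    weighted = {k: alpha[k] * raw[k] / total for k in raw}
    z = sum(weighted.values()) or 1.0
    return {k: weighted[k] / z for k in weighted}

# Hypothetical two-sense entry for `break': destroy vs. violate.
senses = {
    "destroy": ({"destroy", "crack", "smash"}, {"cup", "glass", "thread"}),
    "violate": ({"violate", "disobey"}, {"promise", "contract"}),
}
alpha = {"destroy": 1.5, "violate": 1.3}   # order weights as in footnote 5
snt = {"cup", "table"}                      # input: "break a cup on the table"
print(spf_norm(snt, senses, alpha))         # the destroy sense dominates
```

With `cup` matching an example word of the destroy sense and nothing matching the violate sense, all normalized preference mass goes to destroy.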

3.2 Sense Probability

Sense probability represents how likely target words with the same sense are to co-occur with translations of the other words within the input sentence. Let us suppose that the i-th word in an input sentence is s_i and the k-th sense of s_i is s^k_i. Then, sense probability sp(s^k_i) is computed as follows:

  n(t^k_iq) = Σ_{(s_j, m, c) ∈ φ(s_i)} Σ_{p=1..m} f(t^k_iq, t_jp, c) / (f(t^k_iq) + f(t_jp))   (4)
  sp(s^k_i) = p̂_sense(s^k_i) = Σ_q n(t^k_iq) / Σ_x Σ_y n(t^x_iy)     (5)

In the equation, φ(s_i) signifies a set of words that co-occur with s_i on syntactic relations. In an element (s_j, m, c) of φ(s_i), s_j is a word that co-occurs with s_i in an input sentence, c is a syntactic relation2 between s_i and s_j, and m is the number of translations of s_j. Provided that the set of translations of a sense s^k_i is T^k_i and a member of T^k_i is t^k_iq, the frequency of co-occurrence of t^k_iq and t_jp with a syntactic relation c is denoted as f(t^k_iq, t_jp, c), which is extracted from target language corpora.3 Therefore,

2 We guess 37 syntactic relations in English sentences based on verb pattern information in the dictionary and the results of the memory based shallow parser (Daelemans et al., 1999): subj-verb, obj-verb, modifier-noun, adverb-modifiee, etc.

n(t^k_iq) in equation (4) represents how frequently t^k_iq co-occurs with the translations of s_j. By summing up n(t^k_iq) for all target words in T^k_i, we obtain the sense probability of s^k_i.
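A minimal sketch of equations (4)-(5) for `kkae-da' in `jeobsi-reul kkae-da'; the co-occurrence counts are invented for illustration (the real counts come from the target language corpora):

```python
# Sketch of sense probability (equations (4)-(5)). Frequencies are made up.
f_pair = {  # f(t, t', c): co-occurrence of two target words on relation c
    ("break", "dish", "verb-obj"): 30,
    ("smash", "dish", "verb-obj"): 5,
    ("hatch", "dish", "verb-obj"): 0,
}
f_word = {"break": 100, "smash": 20, "hatch": 40, "dish": 50}

def n(t, cooc):
    # Equation (4): normalized co-occurrence of target word t with the
    # translations of each co-occurring source word.
    total = 0.0
    for (s_j_translations, c) in cooc:
        for t_jp in s_j_translations:
            denom = f_word.get(t, 0) + f_word.get(t_jp, 0)
            if denom:
                total += f_pair.get((t, t_jp, c), 0) / denom
    return total

def sp(senses, cooc):
    # Equation (5): sum n over the translations of each sense, normalize.
    scores = {k: sum(n(t, cooc) for t in ts) for k, ts in senses.items()}
    z = sum(scores.values()) or 1.0
    return {k: v / z for k, v in scores.items()}

senses = {"break": ["break", "smash"], "hatch": ["hatch"]}
cooc = [(["dish"], "verb-obj")]   # `jeobsi' translated as `dish'
print(sp(senses, cooc))           # the break sense wins
```

Because `break' and `smash' co-occur with `dish' while `hatch' does not, the break sense receives all of the probability mass here.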

3.3 Word Probability

Word probability represents how frequently a target word in a sense division co-occurs in a corpus with translations of the other words within the input sentence. We denote word probability as wp(t^k_iq), the probability of selecting t^k_iq from T^k_i. Using n(t^k_iq) in equation (4), we calculate word probability as follows:

  wp(t^k_iq) = p̂_word(t^k_iq) = n(t^k_iq) / Σ_x n(t^k_ix)            (6)
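Equation (6) is a plain normalization over the n-scores of the words in one sense division; a minimal sketch with invented scores for the eat sense of `meok-da':

```python
# Sketch of word probability (equation (6)): normalize the n-scores of
# equation (4) within one sense division. The scores are invented.
def wp(n_scores):
    z = sum(n_scores.values()) or 1.0
    return {t: v / z for t, v in n_scores.items()}

# eat sense of `meok-da': `have' co-occurs with `lunch' most often,
# so it gets the highest word probability.
probs = wp({"eat": 1.0, "have": 6.0, "take": 3.0})
print(max(probs, key=probs.get))   # → have
```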

3.4 Translation Preference

To select a target word among all translations of a source word, we compute translation preference (tpf) by merging sense preference, sense probability and word probability. We simply sum up the values of sense preference and sense probability as a measure of sense disambiguation, and then multiply it by word probability as follows:

  tpf(t^k_iq) = (δ · spf_norm(s^k_i) + (1 − δ) · sp(s^k_i)) · wp(t^k_iq) / β(T^k_i)   (7)

In the equation, β(T^k_i) is a normalizing factor for wp(t^k_iq)4, and δ is a weighting factor for sense preference. We select the target word with the highest tpf as a translation.

When all of the n(t^k_iq) in equation (4) are 0, we use the following equation for smoothing, which uses only the frequency of a target word.

  n(t^k_iq) = f(t^k_iq) / Σ_p f(t^k_ip)                               (8)

3 For example, φ(`kkae-da') contains (jeobsi, 4, verb-obj) in the example of `jeobsi-reul kkae-da', and then the following frequencies will be used: f(break, plate, verb-obj), f(break, dish, verb-obj), ..., f(crack, platter, verb-obj).

4 Because wp(t^k_iq) is a probability of t^k_iq within T^k_i, wp(t^k_iq) becomes 1 when T^k_i has only one element. β(T^k_i) is used to make the maximum value of wp(t^k_iq)/β(T^k_i) equal to 1. Hence, β(T^k_i) = max_j wp(t^k_ij). Using β(T^k_i), we can prevent discounting the translation preference of a word.
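Putting the measures together, equation (7) can be sketched as follows for `meok-da' in `jeomsim-eul meok-da'; all scores are hypothetical and the smoothing of equation (8) is omitted:

```python
# Sketch of translation preference (equation (7)). Scores are hypothetical.
def tpf(spf_norm, sp, wp, delta=0.6):
    # delta weights sense preference against sense probability; beta is the
    # maximum wp within a sense, so the best word of each sense is not
    # discounted (footnote 4).
    prefs = {}
    for sense, pref in spf_norm.items():
        sense_score = delta * pref + (1 - delta) * sp[sense]
        beta = max(wp[sense].values()) or 1.0
        for word, p in wp[sense].items():
            prefs[word] = sense_score * p / beta
    return prefs

spf_norm = {"eat": 0.7, "deaf": 0.3}
sp = {"eat": 0.8, "deaf": 0.2}
wp = {"eat": {"eat": 0.1, "have": 0.6, "take": 0.3}, "deaf": {"deaf": 1.0}}
prefs = tpf(spf_norm, sp, wp)
print(max(prefs, key=prefs.get))   # → have
```

The eat sense wins on the combined sense score, and within that sense `have' carries the highest word probability, so `have a lunch' is preferred over `eat a lunch'.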

4 Evaluation

We evaluated our method on English-to-Korean translation. EssEEK (1995) was used as the bilingual dictionary. We converted this English-English-Korean dictionary into a machine readable form, which has about 43,000 entries, 34,000 unique words and 80,000 senses. From the dictionary, we extracted sense definitions, example sentences and translations for each sense. Co-occurrence data for Korean was extracted from the Yonsei Corpus, the KAIST Corpus and the Sejong Corpus using the method proposed in Yoon (2001). The number of extracted co-occurrences is about 600,000.

To exclude any kind of human intervention during knowledge extraction and evaluation, we evaluated our method similarly to Koehn and Knight (2001). English-Korean aligned sentences were collected from novels, textbooks for high school students and sentences in a Korean-to-English bilingual dictionary. Among those sentences, we extracted sentence pairs in which words in the English sentence satisfy the following condition: the paired Korean sentence contains a word that is defined in the dictionary as a translation of the English word. We randomly chose 945 sentences as an evaluation set, in which 3,081 words in English sentences satisfy the condition. Among them, 1,653 are nouns, 990 are verbs, 322 are adjectives, and 116 are adverbs.

We used Brill's tagger (Brill, 1995) and the Memory-Based Shallow Parser (Daelemans et al., 1999) to analyze English sentences. To analyze Korean sentences, we used a POS tagger for Korean (Kim and Kim, 1996).

First, we evaluated the accuracies of sense preference (spf) and sense probability (sp). If any target word of the sense with the highest value is included in an aligned target language sentence, we regarded it as a correct result.
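The correctness criterion just described can be sketched as follows; the word sets are invented for illustration:

```python
# Sketch of the evaluation criterion of section 4: a sense is counted
# correct if any of its target words appears in the aligned
# target-language sentence. The example data is invented.
def is_correct(sense_target_words, aligned_sentence_words):
    return any(t in aligned_sentence_words for t in sense_target_words)

aligned = {"geu", "jeobsi", "kkae", "da"}        # toy aligned Korean words
print(is_correct({"kkae", "busu"}, aligned))     # True: `kkae' appears
print(is_correct({"buhwa"}, aligned))            # False
```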

The accuracy of sense preference is shown in Table 1. In the table, a w/o DEF row shows the result produced by removing the DEF clue in equation (1), and a w/o ORDER row is the result produced without the weight factor α(s^k) of equation (2). As shown in the table, the result with the weight factor and without sense definition sentences is best for all cases.

Table 1: Accuracy of sense preference (spf)

                            noun     verb     adj.     adv.     all
  w/o ORDER    w/o DEF      59.23%   42.93%   45.03%   34.48%   52.47%
               with DEF     57.65%   40.20%   49.07%   33.62%   51.14%
  with ORDER5  w/o DEF      66.30%   48.48%   59.32%   44.83%   59.94%
               with DEF     65.76%   45.56%   56.52%   42.24%   58.41%

Table 2: Accuracy of sense probability (sp)

                            noun     verb     adj.     adv.     all
  with all cooc word        52.49%   30.49%   38.94%   59.35%   46.52%
  with case cooc word       49.62%   31.33%   40.12%   60.98%   45.41%

Table 3: Accuracy of translation selection

                                 noun     verb     adj.     adv.     all
  random selection               11.11%   3.92%    7.41%    11.12%   6.77%
  1st translation of 1st sense   12.89%   12.12%   7.45%    2.59%    11.86%
  most frequent                  46.34%   27.58%   31.99%   37.93%   38.68%
  spf                            53.72%   38.18%   39.75%   44.83%   46.98%
  with all   sp                  42.88%   19.28%   26.25%   39.02%   35.04%
  cooc       wp                  51.66%   31.41%   32.92%   46.55%   43.10%
  word       sp·wp               51.97%   31.92%   33.54%   50.00%   43.62%
             spf·wp              55.23%   40.00%   41.30%   50.00%   50.17%
             (spf+sp)·wp6        54.99%   34.55%   36.34%   50.86%   46.42%
  with case  sp                  38.90%   20.20%   25.96%   37.40%   33.05%
  cooc       wp                  48.70%   34.04%   35.40%   45.69%   42.58%
  word       sp·wp               49.12%   33.74%   36.34%   50.86%   43.01%
             spf·wp              53.60%   42.22%   42.86%   50.86%   49.93%
             (spf+sp)·wp6        51.18%   36.46%   38.20%   53.45%   45.28%

The accuracy of sense probability is shown in Table 2. In equation (4), we proposed to use syntactic relations to get sense probability. In the table, the result obtained by using words on a syntactic relation (with case cooc word) is compared to that obtained by using all words within a sentence (with all cooc word). As shown, the accuracy for nouns is higher when using all words in a sentence, whereas the accuracy for the others is higher when considering syntactic relations.

Table 3 shows the result of translation selection.7 We set three types of baseline: random selection, most frequent, and 1st translation of 1st sense.8 We conducted the experiment altering the combination of spf, sp and wp. An spf row shows the accuracy of selecting the first translation of the sense with the highest spf, and an sp row shows the accuracy of selecting the first translation of the sense with the highest sp. A wp row shows the accuracy of using only word probability; in other words, it is obtained by assuming that all translations of a source word have the same sense. Therefore, the result in a wp row can be regarded as a result produced by a method that uses only target word co-occurrence. An sp·wp row is the result obtained without spf_norm, and an spf·wp row is the result obtained without sp in equation (7). An (spf+sp)·wp row is the result of combining all measures.

For the best case, the accuracy for nouns is 55.23%, that for verbs is 42.22% and that for adjectives is 42.86%, although we use a simple method for sense disambiguation.

5 In the case of α(1st sense)=1.5, α(2nd sense)=1.3, α(3rd sense)=1.15, and α(remainder)=1.
6 In the case of δ=0.6, which shows the best result in the experiment.
7 To get spf, we use only the best combination of clues - with the weight factor and without sense definition sentences.
8 The result of selecting the translation that is recorded first in the dictionary.


5 Discussion

We can summarize the advantages of our method as follows:

- reduction of the problem complexity
- simplified knowledge acquisition
- selection of more appropriate translations
- use of a mono-bilingual dictionary
- integration of syntactic relations
- robustness of results

The figure below shows the average number of senses and translations for an English word in EssEEK (1995). The left half of the graph is for all English words in the dictionary, and the right half is for the words that are marked as frequent or significant words in the dictionary. On average, a frequently appearing English word has 2.82 senses, each sense has 1.96 Korean translations, and consequently an English word has 5.55 Korean translations. Moreover, in our evaluation set, a word has 6.86 senses and 14.78 translations. Most approaches to translation selection have tried to translate a source word directly into a target word, thus the complexity of their method grows proportionally to the number of translations per word. In contrast, the complexity of our method is proportional only to the number of senses per word or the number of translations per sense. Therefore, we could reduce the complexity of translation selection.
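Using the dictionary averages quoted above for a frequent English word (2.82 senses per word, 1.96 translations per sense), the candidate-space arithmetic looks like this:

```python
# Candidate-space arithmetic from the EssEEK averages for frequent words.
senses_per_word = 2.82           # n: senses per source word
translations_per_sense = 1.96    # m: translations per sense
direct = senses_per_word * translations_per_sense    # word-to-word: ~ n * m
two_step = senses_per_word + translations_per_sense  # sense, then word: ~ n + m
print(round(direct, 2), round(two_step, 2))   # 5.53 4.78
```

Direct word-to-word selection searches all translations of a word at once, while the two-step method first picks among the senses and then among the translations of the chosen sense.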

When the degree of ambiguity of a source language is n and that of a target language is m, the complexity of knowledge acquisition for direct translation amounts to around n×m.

Figure 3: The average number of senses and translations for an English word (senses per word, translations per sense and translations per word, shown for all words and for frequent words, by part of speech)

Therefore, knowledge acquisition for translation is more complicated than other problems of natural language processing. Although the complexity of knowledge acquisition of the methods based on a target language corpus is only m, ignorance of the source language results in selecting inappropriate target words. In contrast, by splitting translation selection into two sub-problems, our method uses only existing resources - a bilingual dictionary and a target language monolingual corpus. Therefore, we could alleviate the complexity of knowledge acquisition to around n+m.

As shown in the previous section, our method can select more appropriate translations than the method based on target word co-occurrence, although we used coarse knowledge for sense disambiguation. This is because the co-occurrence based method does not consider the sense ambiguity of a target word. Consider the translation of the English phrase `have a car'. The word `car' is translated into a Korean word `cha', which has many meanings including car and tea. The word `have' has many senses - possess, eat, drink, etc. - among which the drink sense is mapped into a Korean verb `masi-da'. Because of the tea sense of `cha', the frequency of co-occurrence of `cha' and `masi-da' is dominant in a corpus over that of all other translations of `car' and `have'. In that case, the method that uses only word co-occurrence (wp in our experiment) translated `have a car' into an incorrect sentence `cha-reul masi-da' that means `have a tea'. In contrast, our method produced the correct translation `cha-reul kaji-da', because we disambiguate the sense of `have' as possess by employing sense preference before using co-occurrence.

In the experiment, the combination spf·wp gets higher accuracy than (spf+sp)·wp in most cases. It is due to the low accuracy of sp, which is in turn caused by the ambiguity of a target word, as shown in the example of `have a car'. Nevertheless, we do not think this means sense probability is useless. The accuracies of sp·wp are mostly better than those of wp, so sense probability can work as a countermeasure when no knowledge of a source language exists for disambiguating the sense of a word.

A bilingual dictionary provides sense definitions in a source language and syntactic patterns for the predicate. While previous research on sense disambiguation that exploits those kinds of information has shown reliable results, in our experiment the use of sense definitions lowers the accuracy. It is likely because we used sense definitions too primitively. Therefore, we expect that we could increase the accuracy by properly utilizing additional information in the dictionary.

Many studies on translation selection have been concerned with the translation of nouns only. In particular, some of them use all co-occurring words within a sentence, and others use a few restricted types of syntactic relations. Moreover, each syntactic relation is utilized independently. In this paper, we proposed a statistical model that integrates various syntactic relations, with which the accuracies for verbs, adjectives, and adverbs increase. Nevertheless, since the accuracy for nouns is higher when using all words in a sentence, it seems that syntactic relations may need to be used differently for each case.

The other advantage of our method is robustness. Since our method disambiguates the sense of a source language word using reliable knowledge in a dictionary, we could avoid selecting or generating a target word whose meaning is definitely improper in the given sentence, like `hatch a dish' for `jeobsi-reul kkae-da' and `cha-reul masi-da' for `have a car'.

6 Conclusion

In this paper, we proposed a hybrid method for translation selection. By dividing translation selection into sense disambiguation of a source word and selection of a target word, we could simplify knowledge acquisition and select more appropriate translations.

Acknowledgement

We are grateful to Sabine Buchholz and Bertjan Busser at Tilburg University and Professor Walter Daelemans at the University of Antwerp for their help in using the Memory-Based Shallow Parser (MBSP).

References

Eric Brill. 1995. Transformation-based error-driven learning and natural language processing: A case study in part-of-speech tagging. Computational Linguistics, 21(4).

Peter F. Brown, John Cocke, Vincent Della Pietra, Stephen Della Pietra, Frederick Jelinek, John D. Lafferty, Robert L. Mercer, and Paul S. Roossin. 1990. A statistical approach to machine translation. Computational Linguistics, 16(2).

Walter Daelemans, Sabine Buchholz, and Jorn Veenstra. 1999. Memory-based shallow parsing. In Proceedings of the 3rd International Workshop on Computational Natural Language Learning (CoNLL-99).

Ido Dagan and Alon Itai. 1994. Word sense disambiguation using a second language monolingual corpus. Computational Linguistics, 20(4).

EssEEK. 1995. Essence English-English Korean Dictionary. MinJungSeoRim.

Jae-Hoon Kim and Gil Chang Kim. 1996. Fuzzy network model for part-of-speech tagging under small training data. Natural Language Engineering, 2(2).

Philipp Koehn and Kevin Knight. 2001. Knowledge sources for word-level translation models. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP-2001).

Hyun Ah Lee, Jong C. Park, and Gil Chang Kim. 1999. Lexical selection with a target language monolingual corpus and an MRD. In Proceedings of the 8th International Conference on Theoretical and Methodological Issues in Machine Translation (TMI-99).

Detlef Prescher, Stefan Riezler, and Mats Rooth. 2000. Using a probabilistic class-based lexicon for lexical ambiguity resolution. In Proceedings of the 18th International Conference on Computational Linguistics (COLING-2000).

German Rigau, Jordi Atserias, and Eneko Agirre. 1997. Combining unsupervised lexical knowledge methods for word sense disambiguation. In Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics (ACL-97).

Juntae Yoon. 2001. Efficient dependency analysis for Korean sentences based on lexical association and multi-layered chunking. Literary and Linguistic Computing.
