

A Parsing Algorithm That Extends Phrases

Daniel Chester

Department of Computer Sciences
The University of Texas at Austin
Austin, Texas 78712

It is desirable for a parser to be able to extend a phrase even after it has been combined into a larger syntactic unit. This paper presents an algorithm that does this in two ways, one dealing with "right extension" and the other with "left recursion". A brief comparison with other parsing algorithms shows it to be related to the left-corner parsing algorithm, but it is more flexible in the order in which it permits phrases to be combined. It has many of the properties of the sentence analyzers of Marcus and Riesbeck, but is independent of the language theories on which those programs are based.

1. Introduction

To analyze a sentence of a natural language, a computer program recognizes the phrases within the sentence, builds data structures, such as conceptual representations, for each of them and combines those structures into one that corresponds to the entire sentence. The algorithm which recognizes the phrases and invokes the structure-building procedures is the parsing algorithm implemented by the program. The parsing algorithm is combined with a set of procedures for deciding between alternative actions and for building the data structures. Since it is organized around phrases, it is primarily concerned with syntax, while the procedures it calls deal with non-syntactic parts of the analysis. When the program is run, there may be a complex interplay between the code segments that handle syntax and those that handle semantics and pragmatics, but the program organization can still be abstracted into a (syntactic) parsing algorithm and a set of procedures that are called to augment that algorithm.

By taking the view that the parsing algorithm recognizes the phrases in a sentence, that is, the components of its surface structure and how they can be decomposed, it suffices to specify the syntax of a natural language, at least approximately, with a context-free phrase structure grammar, the rules of which serve as phrase decomposition rules. Although linguists have developed more elaborate grammars for this purpose, most computer programs for sentence analysis, e.g., Heidorn (1972), Winograd (1972) and Woods (1970), specify the syntax with such a grammar, or something equivalent, and then augment that grammar with procedures and data structures to handle the non-context-free components of the language. The notion of parsing algorithm is therefore restricted in this paper to an algorithm that recognizes phrases in accordance with a context-free phrase structure grammar.

Since the parsing algorithm of a sentence analysis program determines when data structures get combined, it seems reasonable to expect that the actions of the parser should reflect the actions on the data structures. In particular, the combination of phrases into larger phrases can be expected to coincide with the combination of corresponding data structures into larger data structures. This happens naturally when the computer program is such that it calls the procedures for combining data structures at the same time the parsing algorithm indicates that the corresponding phrases should be combined.

1.1 Other Parsing Algorithms

According to one classification of parsing algorithms (Aho and Ullman 1972), most analysis programs are based on algorithms that are either "top-down", "bottom-up" or "left-corner", though according to a recent study by Grishman (1975), the top-down and bottom-up approaches are dominant. The principle of top-down parsing is that the rules of the controlling grammar are used to generate a sentence that matches the one being analyzed. A serious problem with this approach, if computed phrases are supposed to correspond to natural phrases in a sentence, is that the parser cannot handle left-branching phrases. But such phrases occur in English, Japanese, Turkish and other natural languages (Chomsky 1965, Kimball 1973, Lyons 1970).

Copyright 1980 by the Association for Computational Linguistics. Permission to copy without fee all or part of this material is granted provided that the copies are not made for direct commercial advantage and the Journal reference and this copyright notice are included on the first page. To copy otherwise, or to republish, requires a fee and/or specific permission.

0362-613X/80/020087-10$01.00

The principle of bottom-up parsing is that a sequence of phrases whose types match the right-hand side of a grammar rule is reduced to a phrase of the type on the left-hand side of the rule. None of the matching is done until all the phrases are present; this can be ensured by matching the phrase types in the right-to-left order shown in the grammar rule. The difficulty with this approach is that the analysis of the first part of a sentence has no influence on the analysis of later parts until the results of the analyses are finally combined. Efforts to overcome this difficulty lead naturally to the third approach, left-corner parsing.

Left-corner parsing, like bottom-up parsing, reduces phrases whose types match the right-hand side of a grammar rule to a phrase of the type on the left-hand side of the rule; the difference is that the types listed in the right-hand side of the rule are matched from left to right for left-corner parsing instead of from right to left. This technique gets its name from the fact that the first phrase found corresponds to the left-most symbol of the right-hand side of the grammar rule, and this symbol has been called the left corner of the rule. (When a grammar rule is drawn graphically with its left-hand side as the parent node and the symbols of the right-hand side as the daughters, forming a triangle, the left-most symbol is the left corner of the triangle.) Once the left-corner phrase has been found, the grammar rule can be used to predict what kind of phrase will come next. This is how the analysis of the first part of a sentence can influence the analysis of later parts.

Most programs based on augmented transition networks employ a top-down parser to which registers and structure building routines have been added, e.g., Kaplan (1972) and Wanner and Maratsos (1978). It is important to note, however, that the concept of augmented transition networks is a particular way to represent linguistic knowledge; it does not require that the program using the networks operate in top-down fashion. In an early paper by Woods (1970), alternative algorithms that can be used with augmented transition networks are discussed, including the bottom-up and Earley algorithms.

The procedure-based programs of Winograd (1972) and Novak (1976) are basically top-down parsers, too. The NLP program of Heidorn (1972) employs an augmented phrase structure grammar to combine phrases in a basically bottom-up fashion. Likewise, PARRY, the program written by Colby (Parkison, Colby and Faught 1977), uses a kind of bottom-up method to drive a computer model of a paranoid.

1.2 A New Parsing Algorithm

This paper presents a parsing algorithm that allows data structures to be combined as soon as possible. The algorithm permits a structure A to be combined with a structure B to form a structure C, and then to enlarge B to form a new structure B'. This new structure is to be formed in such a way that C is now composed of A and B' instead of A and B. The algorithm permits these actions on data structures because it permits similar actions on phrases, namely, phrases are combined with other phrases and afterward are extended to encompass more words in the sentence being analyzed. This behavior of combining phrases before all of their components have been found is called closure by Kimball (1973). It is desirable because it permits the corresponding data structures to be combined and to influence the construction of other data structures sooner than they otherwise could.

In the next section of this paper the desired behavior for combining phrases is discussed in more detail to show the two kinds of actions that are required. Then the algorithm itself is explained and its operation illustrated by examples, with some details of an experimental implementation being given as well. Finally, this algorithm is compared to those used in the sentence analysis programs of Marcus and Riesbeck, and some concluding remarks are made.

2. Phrase Extension

A parser that extends phrases combines phrases before it has found all of their components. When parsing the sentence

(1) This is the cat that caught the rat that stole the cheese.

it combines the first four words into the phrase "this is the cat", in which "the cat" is a noun phrase, then extends that noun phrase to "the cat that caught the rat", in which "the rat" is a noun phrase, and finally extends that noun phrase to "the rat that stole the cheese." This is apparently how people parse that sentence, because, as Chomsky (1965) noted, it is natural to break up the sentence into the fragments


(by adding intonation, pauses or commas) rather than at the boundaries of the major noun phrases:

this is the cat
that caught the rat
that stole the cheese

Likewise, when the parser parses the sentence

(2) The rat stole the cheese in the pantry by the bread.

it forms the phrase "the rat stole the cheese", then extends the noun phrase "the cheese" to "the cheese in the pantry" and extends that phrase to "the cheese in the pantry by the bread".

These two examples show the two kinds of actions needed by a parser in order to exhibit the behavior called closure. Each of these actions will now be described in more detail, using the following grammar, G1:

S -> NP VP
NP -> Pro
NP -> NP1
NP -> NP1 RelPro VP
VP -> V NP
Pro -> this
NP1 -> Det N
NP1 -> NP1 PP
RelPro -> that
V -> is
V -> caught
V -> stole
Det -> the
N -> cat
N -> rat
N -> cheese
N -> pantry
N -> bread
PP -> Prep NP
Prep -> in
Prep -> by

2.1 Right Extension

The first kind of action needed by the parser is to add more components to a phrase according to a rule whose right-hand side extends the right-hand side of another rule. This is illustrated by sentence 1. With grammar G1, the phrase "this is the cat" has the phrase structure

            S
          /   \
        NP     VP
         |    /  \
        Pro  V    NP
         |   |    |
       this  is  NP1
                 / \
               Det  N
                |   |
               the cat

In order to extend the NP phrase "the cat", the substructure

        NP
         |
        NP1
        / \
      Det  N
       |   |
      the cat

must be changed to the substructure

               NP
          /    |    \
        NP1  RelPro   VP
        / \    |     /  \
      Det  N  that  V    NP
       |   |        |    |
      the cat    caught NP1
                        / \
                      Det  N
                       |   |
                      the rat

The parser, in effect, must extend the NP phrase according to the rule NP -> NP1 RelPro VP, whose right-hand side extends the right-hand side of the rule NP -> NP1.

There is a simple way to indicate in a grammar when a phrase can be extended in this way: when two rules have the same left-hand side, and the right-hand side of the first rule is an initial segment of the right-hand side of the second rule, a special mark can be placed after that initial segment in the second rule and then the first rule can be discarded. The special mark can be interpreted as saying that the rest of the right-hand side of the rule is optional. Using * as the special mark, the rule NP -> NP1 * RelPro VP can replace the rules NP -> NP1 and NP -> NP1 RelPro VP in the grammar G1.


Our algorithm therefore requires a modified grammar. For the general case, a grammar is modified by execution of the following step:

1) For each rule of the form X -> Y1 ... Yk, whenever there is another rule of the form X -> Y1 ... Yk Yk+1 ... Yn, replace this other rule by X -> Y1 ... Yk * Yk+1 ... Yn, provided Yk+1 is not *.

When no more rules can be produced by the above step, the following step is executed:

2) Delete each rule of the form X -> Y1 ... Yk for which there is another rule of the form X -> Y1 ... Yk Yk+1 ... Yn.

The mark * tells the parser to combine the phrases that correspond to the symbols preceding * as if the rule ended there. After doing that, the parser tries to extend the combination by looking for phrases to correspond to the symbols following *.

2.2 Left Recursion

The second kind of action needed by the parser is to extend a phrase according to a left-recursive rule. A left-recursive rule is a rule of the form X -> X Y1 ... Yn. The action based on a left-recursive rule is illustrated by sentence 2 above. The phrase "the rat stole the cheese" has the phrase structure

             S
           /   \
         NP     VP
          |    /  \
        NP1   V    NP
        / \   |    |
      Det  N stole NP1
       |   |       / \
      the rat    Det  N
                  |   |
                 the cheese

In order to extend the phrase "the cheese", the substructure

       NP1
       / \
     Det  N
      |   |
     the cheese

must be changed to the substructure

            NP1
          /     \
        NP1      PP
        / \     /  \
      Det  N  Prep  NP
       |   |   |    |
      the cheese in NP1
                    / \
                  Det  N
                   |   |
                  the pantry

The parser, in effect, must extend the NP1 phrase according to the left-recursive rule NP1 -> NP1 PP.

In the general case, after finding a phrase and combining it with others, the parser must look for more phrases to combine with it according to some left-recursive rule in the grammar. Left-recursive rules, therefore, play a special role in this algorithm.

It might be thought that left-recursive rules are the only mechanism needed to extend phrases. This is not true because phrases extended in this way have a different property from those extended by right extension. In particular, left-recursive rules can be applied several times to extend the same phrase, which is not always the desired behavior for a given language phenomenon. For instance, any number of PP phrases can follow an NP1 phrase without connecting words or punctuation because of the left-recursive rule NP1 -> NP1 PP. If, on the other hand, NP -> NP RelPro VP were used instead of NP -> NP1 RelPro VP, any number of relative clauses (RelPro and VP phrase pairs) could follow an NP phrase, which is generally ungrammatical in English unless the clauses are separated by commas or connective words like "and" and "or".
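The unbounded stacking that a left-recursive rule allows can be illustrated with a small sketch (the helper below is illustrative, not part of the paper's algorithm): each application of NP1 -> NP1 PP appends one more PP, with no bound on the number of prepositional phrases.

```python
def extend_np1(np1_words, pp_words_list):
    """Apply the left-recursive rule NP1 -> NP1 PP once per PP in the list."""
    for pp in pp_words_list:
        np1_words = np1_words + pp   # the old NP1 becomes the left corner of a bigger NP1
    return np1_words

np1 = ['the', 'cheese']
pps = [['in', 'the', 'pantry'], ['by', 'the', 'bread']]
print(' '.join(extend_np1(np1, pps)))
# -> the cheese in the pantry by the bread
```

A third, fourth, or fifth PP could be appended the same way, which is exactly the repeated-application property distinguishing left recursion from right extension.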

3. Details of the Parsing Algorithm


3.1 Phrase Status Elements

In order to combine and extend phrases properly, a parser must keep track of the status of each phrase; in particular, it must note what kind of phrase will be formed, what component phrases are still needed to complete the phrase, and whether the phrase is a new one or an extension. This information can be represented by a phrase status element. A phrase status element is a triple of the form (Yk ... Yn,X,F) where, for some symbols Y1, ..., Yk-1, there is a grammar rule of the form X -> Y1 ... Yk-1 Yk ... Yn, and F is a flag that has one of the values n, e, or p, which stand for "new", "extendible" and "progressing", respectively. Intuitively, this triple means that the phrase in question is an X phrase and that it will be completed when phrases of types Yk through Yn are found. If F = n, the phrase will be a new phrase. If F = e, the phrase is ready for extension into a longer X phrase, but none of the extending phrase components have been found yet. If F = p, however, some of the extending phrase components have been found already and the rest must be found to complete the phrase. The status of being ready for extension has to be distinguished from the status of having the extension in progress, because it is during the ready status that the decision whether to try to extend the phrase is made.
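One possible concrete representation of the triple, sketched here in Python with illustrative field names that are not from the paper, is a small immutable record:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class StatusElement:
    needed: Tuple[str, ...]   # Yk ... Yn: component phrase types still needed
    cat: str                  # X: the type of phrase being formed
    flag: str                 # 'n' (new), 'e' (extendible) or 'p' (progressing)

# (,NP,n): a complete new NP phrase.
np_done = StatusElement((), 'NP', 'n')
# (PP,NP1,e): an NP1 phrase that may be extended by a following PP.
np1_ext = StatusElement(('PP',), 'NP1', 'e')
```

An empty `needed` tuple corresponds to the empty first part of a triple like (,X,F), i.e., a phrase with no outstanding components.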

3.2 The Algorithm

The parsing algorithm for extending phrases is embodied in a set of operations for manipulating a list of phrase status elements, called the element list, according to a given modified grammar and a given sentence to be analyzed. Beginning with an empty element list, the procedure is applied by performing the operations repeatedly until the list contains only the element (,S,n), where S is the grammar's start symbol, and all of the words in the given sentence have been processed.

The following are the six operations which are applied to the element list:

1. Replace an element of the form (* Y1 ... Yn,X,F) with the pair (Y1 ... Yn,X,e) (,X,F).

2. Replace an element of the form (,X,p) with the element (Y1 ... Yn,X,e) if there is a left-recursive rule of the form X -> X Y1 ... Yn in the grammar; if there is no such rule, delete the element.

3. Replace adjacent phrase status elements of the form (,X,n) (X Y1 ... Yn,Z,F) with the pair (,X,p) (Y1 ... Yn,Z,F'). If the flag F = e, then F' = p; otherwise, F' = F.

4. Replace an element of the form (,X,n) with the pair (,X,p) (Y1 ... Yn,Z,n) if there is a grammar rule of the form Z -> X Y1 ... Yn in the grammar, provided the rule is not left-recursive.

5. Get the next word W from the sentence and add the element (,W,n) to the front (left) end of the element list.

6. Delete an element of the form (Y1 ... Yn,X,e).

These operations are applied one at a time, in arbitrary order. If more than one operation can cause a change in the element list at any given time, then one of them is selected for actual application. The manner of selection is not specified here because that is a function of the data structures and procedures that would have to be added to incorporate the algorithm into a complete sentence analysis program.
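The six operations can be sketched as small functions on the element list. This is an illustrative sketch, not the paper's implementation: elements are tuples (needed, X, flag), where needed is the tuple Yk ... Yn (possibly beginning with the mark '*'), and grammar rules are (lhs, rhs) pairs from a modified grammar. Each function returns the list of elements that replace its input, or None when the operation does not apply.

```python
def op1(elem):
    # (* Y1 ... Yn, X, F) -> (Y1 ... Yn, X, e) (, X, F)
    needed, x, f = elem
    if needed[:1] == ('*',):
        return [(needed[1:], x, 'e'), ((), x, f)]

def op2(elem, grammar):
    # (, X, p) -> (Y1 ... Yn, X, e) for a left-recursive rule X -> X Y1 ... Yn;
    # delete the element when no such rule exists.
    needed, x, f = elem
    if needed == () and f == 'p':
        for lhs, rhs in grammar:
            if lhs == x and rhs[:1] == (x,):
                return [(rhs[1:], x, 'e')]
        return []

def op3(left, right):
    # (, X, n) (X Y1 ... Yn, Z, F) -> (, X, p) (Y1 ... Yn, Z, F'), F' = p if F = e
    (ln_, x, lf), (rn, z, rf) = left, right
    if ln_ == () and lf == 'n' and rn[:1] == (x,):
        return [((), x, 'p'), (rn[1:], z, 'p' if rf == 'e' else rf)]

def op4(elem, grammar):
    # (, X, n) -> (, X, p) (Y1 ... Yn, Z, n) for a non-left-recursive Z -> X Y1 ... Yn
    needed, x, f = elem
    if needed == () and f == 'n':
        for lhs, rhs in grammar:
            if rhs[:1] == (x,) and lhs != x:     # lhs == x would be left-recursive
                return [((), x, 'p'), (rhs[1:], lhs, 'n')]

def op5(word, elements):
    # add (, W, n) for the next word W to the front (left) end of the list
    return [((), word, 'n')] + elements

def op6(elem):
    # delete an element of the form (Y1 ... Yn, X, e)
    if elem[2] == 'e':
        return []

# First steps of the trace for sentence 1, using a fragment of G1:
G = [('Pro', ('this',)), ('NP', ('Pro',)), ('S', ('NP', 'VP'))]
elems = op5('this', [])                 # (,this,n)
elems = op4(elems[0], G) + elems[1:]    # (,this,p) (,Pro,n)
elems = op2(elems[0], G) + elems[1:]    # (,Pro,n)
print(elems)
# -> [((), 'Pro', 'n')]
```

The three-step demo reproduces the opening of the trace in Section 3.3: the word "this" becomes a new phrase, operation 4 starts a Pro phrase from it, and operation 2 deletes (,this,p) because no left-recursive rule for "this" exists.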

These operations have a fairly simple purpose: the major goal is to find new phrases, which are represented by phrase status elements of the form (,X,n). Initially, by application of operation 5, a word of the sentence to be analyzed is made into a new phrase. When a new phrase is found, whether by operation 5 or some other operation, there are two ways that can be used to find a larger phrase. One way is to attempt to find a new phrase that begins with the just-found phrase; this is the purpose of operation 4. Once an X phrase is found, the element (,X,n) is replaced by (,X,p) (Y1 ... Yn,Z,n) for some grammar rule of the form Z -> X Y1 ... Yn and some Z different from X. The element (Y1 ... Yn,Z,n) represents an attempt to find a Z phrase, of which the first component, the X phrase, has been found.

The second way that can be used to make a larger phrase is to make the X phrase a component of a phrase that has already been started. In operation 3, the element (,X,n) represents a new X phrase that can be used in this way. An immediately preceding phrase has been made part of some unfinished Z phrase, represented by the element (X Y1 ... Yn,Z,F). Since the symbol X is the first symbol in the first part of this element, the Z phrase can contain an X phrase at this point in the sentence, so the X phrase is put in the Z phrase, and the result of this action is indicated by the new elements (,X,p) (Y1 ... Yn,Z,F').

In both operations 3 and 4, the element (,X,n) itself is replaced by (,X,p). The flag value p indicates, in this case, that the X phrase has already been added to some larger phrase. Operation 2 tries to extend the X phrase by creating the element (Y1 ... Yn,X,e) for some left-recursive grammar rule X -> X Y1 ... Yn, indicating that the X phrase can be extended if it is followed by phrases of types Y1 ... Yn, in that order. If a Y1 phrase comes next in the sentence, the extension begins by an application of operation 3, which changes the flag from e to p to indicate that the extension is in progress. If there is no Y1 phrase, however, the X phrase cannot be extended, so the element is deleted by operation 6, allowing the parser to go on to the next phrase.

In the event a phrase status element has the form (* Y1 ... Yn,X,F), the X phrase can be considered completed. It can also be extended, however, if it is followed by phrases of types Y1 through Yn. Operation 1 creates the new element (,X,F) to indicate the completion of the X phrase, and the new element (Y1 ... Yn,X,e) to indicate its possible extension. Again, if extension turns out not to be possible, the element can be deleted by operation 6 so that parsing can continue.

3.3 Examples

As examples of how the algorithm works, consider how it applies to sentences 1 and 2. Grammar G1 must be modified first, but this consists only of substituting the rule NP -> NP1 * RelPro VP for the two NP rules in the original grammar, as described earlier. Starting with an empty element list, the sentence "this is the cat that caught the rat that stole the cheese" is processed as shown in Table 1. By operation 5, the word "this" is removed from the sentence and the element (,this,n) is added to the list. By operation 4, this element is replaced by (,this,p) (,Pro,n), which is shortened to (,Pro,n) by operation 2. By two applications of operations 4 and 2, the element list becomes (,NP,n), then (VP,S,n). The element (,is,n) is now added to the front of the list. This element is changed, by operations 4 and 2 again, to (,V,n) and then to (NP,VP,n). The words "the" and "cat" are processed similarly to produce the element list (,N,n) (N,NP1,n) (NP,VP,n) (VP,S,n) at step 20.

At this point, operation 3 is applied to combine (,N,n) with (N,NP1,n), yielding (,N,p) (,NP1,n) in their place. By operation 2, element (,N,p) is removed. The first element of the list is now (,NP1,n), which by operation 4 is changed to (,NP1,p) (* RelPro VP,NP,n). When operation 2 is applied this time, because there is a left-recursive grammar rule for NP1 phrases, element (,NP1,p) is replaced by (PP,NP1,e). Operation 1 is applied to the next element to eliminate the special mark * that appears in it, changing the element list to (PP,NP1,e) (RelPro VP,NP,e) (,NP,n) (NP,VP,n) (VP,S,n) at step 25.

At this point, the element (,NP,n) represents the NP phrase "the cat". Operation 3 is applied to it and the next element, and then operation 2 to the result, reducing those two elements to (,VP,n). By operations 3 and 2 again, this element is combined with (VP,S,n) to produce (,S,n), which at this point represents the phrase "this is the cat." If this phrase were the whole sentence, operation 6 could be applied, reducing the element list to (,S,n) and the sentence would be successfully parsed. There are more words in the sentence, however, so other operations are tried.

The next word, "that", is processed by operations 5, 4 and 2 to add (,RelPro,n) to the front of the list. Since the grammar does not allow a PP phrase to begin with the word "that", operation 6 is applied to eliminate the element (PP,NP1,e), which represents the possibility of an extension of an NP1 phrase by a PP phrase. The next element, (RelPro VP,NP,e), represents the possibility of an extension of an NP phrase when it is followed by a RelPro phrase, however, so operations 3 and 2 are applied, changing the element list to (VP,NP,p) (,S,n). Note that the flag value e has changed to p; this means that a VP phrase must be found now to complete the NP phrase extension or this sequence of operations will fail.

By continuing in this fashion, the sentence is parsed. Since no new details of how the algorithm works would be illustrated by continuing the narration, the continuation is omitted.

The sentence "the rat stole the cheese in the pantry by the bread" is parsed in a similar fashion. The only detail that is significantly different from the previous sentence is that after the element (RelPro VP,NP,e) is deleted by operation 6, instead of (PP,NP1,e), a new situation occurs, in which a phrase can attach to one of several phrases waiting to be extended. The situation occurs after the sentence corresponding to the phrase "the rat stole the cheese" is represented by the element (,S,n) when it first appears on the element list. When the PP phrase "in the pantry" is found, the element (PP,NP1,e) changes to (,NP1,p), indicating that the NP1 phrase "the cheese" has been extended to "the cheese in the pantry". By operation 2, the element (,NP1,p) is changed to (PP,NP1,e) so that the NP1 phrase can be extended again. But "the pantry" is an NP1 phrase also, which means that an element of the form (PP,NP1,e) has been created to extend it


[Table 1 appears here: a 35-step trace showing the element list after each step and which of operations 1-6 was applied; the body of the table is too badly garbled in this copy to reproduce faithfully.]

Table 1. Trace of parsing algorithm on sentence 1.

means that the intervening element (PP,NP1,e) attempting to expand the NP1 phrase "the pantry" has to be deleted.

3.4 Constraints on the Operations

The algorithms for top-down, bottom-up and left-corner parsing are usually presented so that all operations are performed on the top of a stack that corresponds to our element list. We have not constrained our algorithm in this way because to do so would prevent the desired closure behavior. In particular, in the sentence "this is the cat that caught the rat that stole the cheese," the NP phrase "the cat" would not combine into the phrase "this is the cat" until the rest of the sentence was analyzed if such a constraint were enforced. This is because operation 1 would create an element for extending the NP phrase that would have to be disposed of first before the element (,NP,n), created also by that operation, could combine with anything else. Thus, constraining the operations to apply only to the front end of the element list would nullify the algorithm's purpose.


Our algorithm can be viewed as a modification of the left-corner parser. In fact, if a grammar is not modified before use with our algorithm, and if operation 4 is not restricted to non-left-recursive rules, and if operation 2 is modified to delete all elements of the form (,X,p), then our algorithm would actually be a left-corner parser.

3.5 Experimental Program

The algorithm has been presented here in a simple form to make its exposition easier and its operating principles clearer. When it was tried out in an experimental program, several techniques were used to make it work more efficiently. For example, operations 1 and 2 were combined with the other operations so that they were, in effect, applied as soon as possible. Operation 3 was also applied as soon as possible. The other operations cannot be given a definite order for their application; that must be determined by the non-syntactic procedures that are added to the parser.

Another technique increased efficiency by applying operation 4 only when the result is consistent with the grammar. Suppose grammar G1 contained the rule Det --> that as well as the rule RelPro --> that. When the word "that" in the sentence "this is the cat that caught the rat that stole the cheese" is processed, the element list contains the triple (,that,n) (PP,NP1,e) (RelPro VP,NP,e) at one point. The grammar permits operation 4 to be applied to (,that,n) to produce either (,Det,n) or (,RelPro,n), but only the latter can lead to a successful parse because the grammar does not allow either a PP phrase or a RelPro phrase to begin with a Det phrase. The technique for guiding operation 4 so that it produces only useful elements consists of computing beforehand all the phrase types that can begin each phrase. Then the following operation is used in place of operation 4:

4'a. If an element of the form (,X,n) is at the (right) end of the list, replace it with the pair (,X,p) (Y1 ... Yn,Z,n) when there is a grammar rule of the form Z --> X Y1 ... Yn in the grammar, provided the rule is not left-recursive.

4'b. If an element of the form (,X,n) is followed by an element of the form (U1 ... Um,V,F), replace (,X,n) with the pair (,X,p) (Y1 ... Yn,Z,n) when there is a grammar rule of the form Z --> X Y1 ... Yn in the grammar, provided the rule is not left-recursive, and a Z phrase can begin a U1 phrase or Z = U1.

It is sufficient to consider only the first element after an element of the form (,X,n) because if operation 4'b cannot be applied, either that first element can be deleted by operation 6 or the parse is going to fail anyway. Thus, in our example above, operation 6 can be used to delete the element (PP,NP1,e) so that operation 4'b can be applied to (,that,n) (RelPro VP,NP,e). This technique is essentially the same as the selective filter technique described by Griffiths and Petrick (1965) for the left-corner parsing algorithm.
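The "can begin" table that guides operations 4'a and 4'b is just the transitive closure of the left-corner relation of the grammar, and it can be computed once before parsing starts. A minimal sketch in Python (the experimental program itself was in Lisp; the function names and the small grammar fragment below are ours, not grammar G1):

```python
def begins_closure(grammar):
    """For each phrase type, compute the set of phrase types (and words)
    that can begin it: the transitive closure of the left-corner relation.
    grammar maps each phrase type to a list of right-hand sides (tuples)."""
    begins = {x: {rhs[0] for rhs in rhss} for x, rhss in grammar.items()}
    changed = True
    while changed:                         # iterate to a fixed point
        changed = False
        for left_corners in begins.values():
            for y in list(left_corners):
                new = begins.get(y, set()) - left_corners
                if new:
                    left_corners |= new
                    changed = True
    return begins

def may_precede(begins, z, u1):
    """The filter in operation 4'b: a Z phrase must be able to begin
    a U1 phrase, or be one."""
    return z == u1 or z in begins.get(u1, set())
```

With the table in hand, operation 4'b can reject (,Det,n) in the example above, because a Det phrase can begin neither a PP phrase nor a RelPro phrase.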

Another technique increased efficiency further by postponing the decision about which of several grammar rules to apply via operations 3 or 4' for as long as possible. The grammar rules were stored in Lisp list structures so that rules having the same left-hand side and a common initial segment in their right-hand side shared a common list structure. For example, if the grammar consists of the rules

X --> Y Z U

X --> Y Z V

W --> Y Z U

these rules are stored as the list structure

((X (Z (U)
      (V)))
 (W (Z (U))))

which is stored on the property list for Y. The common initial segment shared by the first two rules is represented by the same path to the atom Z in the list structure. The component (X (Z (U) (V))) in this list structure means that a Y phrase can be followed by a Z phrase and then either a U phrase or a V phrase to make an X phrase. When a Y phrase is found, and it is decided to try to find an X phrase, this component makes it possible to look for a Z phrase, but it postpones the decision as to whether the Z phrase should be followed by a U phrase or a V phrase until after the Z phrase has been found. The left-hand sides (X and W) of the rules are listed first to facilitate operation 4'b. This technique is similar to a technique used by Irons (1961) and described by Griffiths and Petrick (1965).
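The shared-structure scheme just described amounts to a trie over rule right-hand sides, indexed under the first symbol and branching first on the left-hand side. A rough Python equivalent of the Lisp property-list structure (the function name is ours):

```python
def index_rules(rules):
    """Index grammar rules under their first right-hand-side symbol so that
    rules with the same left-hand side and a common initial segment share
    one nested structure, as in the Lisp representation in the text."""
    index = {}      # first RHS symbol -> {lhs -> trie over remaining symbols}
    for lhs, rhs in rules:
        node = index.setdefault(rhs[0], {}).setdefault(lhs, {})
        for symbol in rhs[1:]:
            node = node.setdefault(symbol, {})
        node[None] = True       # marks the end of a complete rule
    return index

# The three rules above share structure exactly as the list structure does:
idx = index_rules([("X", ("Y", "Z", "U")),
                   ("X", ("Y", "Z", "V")),
                   ("W", ("Y", "Z", "U"))])
```

Here idx["Y"] plays the role of the structure on Y's property list: its keys give the candidate left-hand sides (X and W) first, and the two X rules share the path through Z before branching to U or V.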

4. Related Work

There are two sentence analysis programs with parsing algorithms that resemble ours in many ways, though theirs have been described in terms that are intimately tied to the particular semantic and syntactic representations used by those programs. The programs are PARSIFAL, by Marcus (1976, 1978a, 1978b), and the conceptual analyzer by Riesbeck (1975a, 1975b).


4.1 PARSIFAL (Marcus)

The basic structural unit of PARSIFAL is the node, which corresponds approximately to our phrase status element. Nodes are kept in two data structures, a pushdown stack called the active node stack, and a three-place constituent buffer. (The buffer actually has five places in it, but the procedures that use it work with only three places at a time.) The grammar rules are encoded into rule packets. Since the organization of these packets has to do with the efficient selection of appropriate grammar rules and the invocation of procedures for adding structural details to nodes, and it is those procedures we want to ignore while looking at the parsing algorithm of PARSIFAL, we will ignore the rule packets in this comparison. The essential fact about rule packets is that they examine only the top node of the stack, the S or NP node nearest the top of the stack, and up to three nodes in the buffer.

The basic operations that can be performed on these structures include attaching the first node in the buffer to the top node of the stack, which corresponds to operation 3 in our algorithm; creating a new node that holds one or more of the nodes in the buffer, which corresponds to operation 4; and reactivating a node (pushing it onto the stack) that has already been attached to another node so that more nodes can be attached to it, which corresponds to the phrase-extending operations 1, 2, and 6. PARSIFAL has one operation that is not similar to the operations of our algorithm: it can create nodes for dummy NP phrases called traces. These nodes are intended to provide a way to account for phenomena that would otherwise require transformational grammar rules to explain them. Our algorithm does not allow such an operation; if such an operation should prove to be necessary, however, it would not be hard to add, or its effect could be produced by the procedures called.
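As a rough sketch of how these three operations act on the two structures (our own toy rendering, not Marcus's code; real PARSIFAL nodes carry far more detail):

```python
class Node:
    """A bare-bones constituent node with a label and attached children."""
    def __init__(self, label, children=None):
        self.label = label
        self.children = children or []

def attach(stack, buffer):
    """Attach the first buffer node to the top stack node (cf. our operation 3)."""
    stack[-1].children.append(buffer.pop(0))

def create(stack, buffer, label, n):
    """Create a new stack node holding the first n buffer nodes (cf. operation 4)."""
    stack.append(Node(label, [buffer.pop(0) for _ in range(n)]))

def reactivate(stack, node):
    """Push an already attached node back onto the stack so that more nodes
    can be attached to it (cf. the phrase-extending operations 1, 2 and 6)."""
    stack.append(node)
```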

One of the benefits of having a buffer in PARSIFAL is that the buffer allows for a kind of look-ahead based on phrases instead of just words. Thus the decision about what grammar rule to apply to the first node in the buffer can be based on the phrases that follow a certain point in the sentence under analysis instead of just the words. The system can look further ahead this way and still keep a strict limit on the amount of look-ahead available. We can get a similar effect with our algorithm if we restrict the application of its operations to the first four or five phrase status elements in the element list. In a sense, the first five elements of the list correspond to the buffer in PARSIFAL and the rest of the list corresponds to the stack. In fact, in a recent modification of PARSIFAL (Shipman and Marcus 1979) the buffer and stack were combined into a single data structure closely resembling our element list.

4.2 Riesbeck's Analyzer

The basic structural unit of Riesbeck's analyzer is the Conceptual Dependency structure as developed by Schank (1973, 1975). A Conceptual Dependency structure is intended to represent the meaning of a word, phrase or sentence. The details of what a Conceptual Dependency structure is will not be discussed here.

The monitor in Riesbeck's analyzer has no list or stack on which operations are performed; instead, it has some global variables that serve the same purpose. Only a few of these variables concern us here. The variable WORD holds the current word being looked at and can be thought of as the front element of our element list. The variable SENSE holds the sense of WORD or of the noun phrase of which WORD is the head. It is like the second element in our list. The equivalent to the third element in our list is the variable REQUESTS, which holds a list of pattern-action rules. There are some other variables (such as ART-INT) that on occasion serve as the fourth element of the list.

Unlike the controllers in many other analysis programs, Riesbeck's monitor is not driven explicitly by a grammar. Instead, the syntactic information it uses is buried in the pattern-action rules attached to each word and word sense within his program's lexicon. Take, for example, the common sense of the verb "give": one pattern-action rule says that if the verb is followed by a noun phrase denoting a person, the sense of that phrase is put in the recipient case slot of the verb. Another pattern-action rule says that if a following noun phrase denotes an object, the sense of the phrase is put in the object case slot of the verb. These pattern-action rules correspond to having grammar rules of the form VP --> give, and VP --> VP NP, where the pattern-action rules describe two different ways that a VP phrase and an NP phrase can combine into a VP phrase. There is a third pattern-action rule that changes the current sense of the word "to" in case it is encountered later in the sentence, but that is one of the actions that occurs below the syntactic level.

Noun phrases are treated by Riesbeck's monitor in a special way. Unmodified nouns are considered to be noun phrases directly, but phrases beginning with an article or adjective are handled by a special subroutine that collects the following adjectives and nouns before building the corresponding Conceptual Dependency structure. Once the whole noun phrase is found, the monitor examines the REQUESTS list to see if there are any pattern-action rules that can use the noun phrase. If so, the associated action is taken and the rule is marked so that it will not be used twice. The monitor is started with a pattern-action rule on the REQUESTS list that puts a beginning noun phrase in the subject case slot of whatever verb that follows. (There are provisions to reassign structures to different slots if later words of the sentence require it.)
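The request mechanism can be caricatured in a few lines of Python (a sketch under our own naming, not Riesbeck's code; ATRANS is Schank's Conceptual Dependency primitive for transfer of possession):

```python
def monitor(items):
    """Toy monitor: REQUESTS holds (test, slot) rules; each completed noun
    phrase sense is offered to the rules in order, and a rule that fires
    is discarded so it is not used twice."""
    frame, requests = {}, []
    for kind, sense in items:
        if kind == "word" and sense == "give":
            frame["act"] = "ATRANS"
            # the two pattern-action rules for "give" described in the text
            requests.append((lambda s: s["type"] == "person", "recipient"))
            requests.append((lambda s: s["type"] == "object", "object"))
        elif kind == "np":
            for i, (test, slot) in enumerate(requests):
                if test(sense):
                    frame[slot] = sense["head"]
                    del requests[i]     # mark the rule as used
                    break
    return frame
```

Feeding it "give" followed by a person phrase and an object phrase fills the recipient and object case slots of the verb's frame, in whichever order the phrases arrive.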

It can be seen that Riesbeck's analysis program works essentially by putting noun phrases in the case slots of verbs as the phrases are encountered in the sentence under analysis. In a syntactic sense, it builds a phrase of type sentence (first noun phrase plus verb) and then extends that phrase as far as possible, much as our algorithm does using left-recursive grammar rules. Prepositions and connectives between simple sentences complicate the process somewhat, but the process is still similar to ours at the highest level of program control.

5. Concluding Remarks

Since our parsing algorithm deals only with syntax, it is not complete in itself, but can be combined with a variety of conceptual representations to make a sentence analyzer. Whenever an operation that combines phrases is performed, procedures to combine data structures can be called as well. When there is a choice of operations to be performed, procedures to make the choice by examining the data structures involved can be called, too. Because our algorithm combines phrases sooner than others, there is greater opportunity for the data structures to influence the progress of the parsing process. This makes the resulting sentence analyzer behave not only in a more human-like way (the closure property), but also in a more efficient way because it is less likely to have to back up.

Although the programs of Marcus and Riesbeck share many of these same properties, the syntactic processing aspects of those programs are not clearly separated from the particular conceptual representations on which they are based. We believe that the parsing algorithm presented here captures many of the important properties of those programs so that they may be applied to conceptual representations based on other theories of natural language.

References

Aho, A. and Ullman, J., 1972. The Theory of Parsing, Translation and Compiling, Vol. 1, Prentice-Hall Inc., New Jersey.

Chomsky, N., 1965. Aspects of the Theory of Syntax, MIT Press, Cambridge, Mass.

Griffiths, T. and Petrick, S., 1965. On the relative efficiencies of context-free grammar recognizers. CACM 8, May, 289-300.

Grishman, R., 1975. A Survey of Syntactic Analysis Procedures for Natural Language. Courant Computer Science Report #8, Courant Institute of Mathematical Sciences, New York University, August. Also appears on AJCL Microfiche 47, 1976.

Heidorn, G., 1972. Natural language inputs to a simulation programming system. Report No. NPS-55HD72101A, Naval Postgraduate School, Monterey, Calif., October.

Irons, E., 1961. A syntax directed compiler for ALGOL 60. CACM 4, January, 51-55.

Kaplan, R., 1972. Augmented transition networks as psychological models of sentence comprehension. Artificial Intelligence 3, 77-100.

Kimball, J., 1973. Seven principles of surface structure parsing in natural language. Cognition 2, 15-47.

Lyons, J., 1970. Chomsky. Fontana/Collins, London.

Marcus, M., 1976. A design for a parser for English. ACM '76 Proceedings, Houston, Texas, Oct 20-22, 62-68.

Marcus, M., 1978a. Capturing linguistic generalizations in a parser for English. Proceedings of the Second National Conference of the Canadian Society for Computational Studies of Intelligence, University of Toronto, Toronto, Ontario, July 19-21, 64-73.

Marcus, M., 1978b. A computational account of some constraints on language. Theoretical Issues in Natural Language Processing-2, D. Waltz, general chairman, University of Illinois at Urbana-Champaign, July 25-27, 236-246.

Novak, G., 1976. Computer understanding of physics problems stated in natural language. AJCL Microfiche 53.

Parkison, R., Colby, K. and Faught, W., 1977. Conversational language comprehension using integrated pattern-matching and parsing. Artificial Intelligence 9, October, 111-134.

Riesbeck, C., 1975a. Computational understanding. Theoretical Issues in Natural Language Processing, R. Schank and B. Nash-Webber, eds., Cambridge, Mass., June 10-13, 11-16.

Riesbeck, C., 1975b. Conceptual analysis. In Schank (1975), 83-156.

Schank, R., 1973. Identification of conceptualizations underlying natural language. In Schank, R. and Colby, K., eds., Computer Models of Thought and Language, W. H. Freeman and Company, San Francisco.

Schank, R., 1975. Conceptual Information Processing. American Elsevier, New York.

Shipman, D. and Marcus, M., 1979. Towards minimal data structures for deterministic parsing. IJCAI-79, Tokyo, Aug 20-23, 815-817.

Wanner, E. and Maratsos, M., 1978. In Linguistic Theory and Psychological Reality, M. Halle, J. Bresnan, G. Miller, eds., MIT Press, Cambridge, Mass., 119-161.

Winograd, T., 1972. Understanding Natural Language. Academic Press, New York.

Woods, W., 1970. Transition network grammars for natural language analysis. CACM 13, October, 591-606.

