
Procedia Computer Science 93 (2016) 547–553

1877-0509 © 2016 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).

Peer-review under responsibility of the Organizing Committee of ICACC 2016. doi: 10.1016/j.procs.2016.07.276


6th International Conference On Advances In Computing & Communications, ICACC 2016, 6-8 September 2016, Cochin, India

Entity Extraction for Malayalam Social Media Text using Structured Skip-gram based Embedding Features from Unlabeled Data

Remmiya Devi G∗, Veena P V, Anand Kumar M, Soman K P

Centre for Computational Engineering and Networking (CEN), Amrita School of Engineering, Coimbatore, Amrita Vishwa Vidyapeetham, Amrita University, India

Abstract

Social media text is generally informal and noisy, but it often contains informative content. Extracting such informative content, for example entities, is a challenging task. The main aim of this paper is to extract entities from Malayalam social media text efficiently. The social media corpus used in our system is from the FIRE2015 entity extraction task. This data is first subjected to pre-processing and feature extraction, after which entity extraction is performed. Apart from conventional stylometric features such as prefixes, suffixes, hash tags and POS tags, unsupervised word embedding features obtained from the Structured Skip-gram model are used to train the system. The extracted features are given to a Support Vector Machine classifier to build and train the model. Testing of the system resulted in better accuracy than the existing systems evaluated in the FIRE2015 task. The unsupervised features retrieved using the Structured Skip-gram model contribute to this improved performance.


Keywords: Entity extraction, Social media text, SVM classifier, Structured Skip-gram model, Stylometric features, Unsupervised word embedding features

1. Introduction

Entity extraction in English is comparatively simple because of the organized and well-defined format of the text. For Indian languages such as Malayalam, Tamil and Hindi, the text lacks such a defined format, which makes entity extraction a tedious process. Entity extraction is an important application of Natural Language Processing that retrieves information such as names of persons, places and organizations from a text corpus.

This paper deals with entity extraction for social media text in the Malayalam language using Structured Skip-gram based embedding features. The Structured Skip-gram model (SSG) is also known as wang2vec [1]. SSG evolved from the original word embedding tool word2vec [2]. The major issue with the word2vec model is that it is insensitive to word order. wang2vec overcomes this issue using either the Continuous Window (order-aware CBOW) model or the Structured Skip-gram model. In our task, we have considered Structured Skip-gram based features from unlabeled

∗ Remmiya Devi G. Tel.: +91 8870488586.
E-mail address: [email protected] (Remmiya Devi G)



text. The unsupervised features are essentially vector representations of each word in the text data. From the input data, stylometric features are acquired using the usual feature extraction procedures. Another set of features are the unsupervised features obtained after training on the input data with SSG. The system utilizes data sets of tweets and Twitter posts from the FIRE2015 entity extraction task. Training and testing of the data sets is done using an SVM-based classifier.

Entity extraction in Indian languages is difficult, since the languages evolve with time and new words are frequently added to the vocabulary. At the same time, entity extraction is an actively researched problem in Natural Language Processing, and several research works have been carried out in this domain. Supervised methods were implemented for the Malayalam language using Trigrams'n'Tags (TnT) based on the Viterbi algorithm and HMM [2]. A machine learning technique called Conditional Random Fields (CRF) has been applied to entity extraction for Indian languages [3,4]. Named Entity Recognition for Indian languages has been done using rich features [5]. SVM has proven to be a successful classifier in various natural language processing tasks, and SVM-based approaches are widely applied for Indian languages [6]. A hybrid technique based on the Maximum Entropy Model is another approach used for entity extraction [7]. Zero-shot learning has been used to perform entity extraction from web pages [8,9]. An experimental study was made based on Pattern Learning, Subclass extraction and List extraction for unsupervised named entity extraction from the web [10]. A large-scale system has been presented for the recognition and semantic disambiguation of named entities based on information extracted from web search results [11]. An overview of the approaches followed in the FIRE task on entity extraction in social media text for Indian languages is presented in [12].

Section 2 describes the background of the paper, i.e., the model used in the proposed system, the Structured Skip-gram model. The methodology used by the system and the feature description are discussed in Section 3. Section 4 includes the experimental results and analysis of the proposed system. The conclusion of the paper is given in Section 5.

2. Background

The system performs the entity extraction task by including unsupervised features known as word embedding features. The social media corpus utilized in this system is from the data set released by the FIRE2015 entity extraction task.

2.1. Structured Skip-gram Model

The Structured Skip-gram model plays a significant role in language modeling by modeling the context words surrounding each word of the text data. A basic illustration of the SSG model is given in Fig. 1.

Fig. 1: Schematic diagram of Structured Skip-gram Model

Given the center word, the SSG model predicts the surrounding words associated with the context, as given in Eq. (1). Here, the context refers to the center word.

\[ O \in \{X_{-c}, \dots, X_{-1}, X_{1}, \dots, X_{c}\} \qquad (1) \]

In the SSG model, word position is taken into consideration. The input word (context) is given in one-hot format. $x^{(i)}$ denotes the $i$-th word from the vocabulary $V$, and $X^{(1)}$ is the input word matrix.

The embedded word vector for the context is retrieved as in Eq. (2):

\[ h = u^{(i)} \qquad (2) \]

where $u^{(i)}$ represents the $i$-th column of $X^{(1)}$, i.e., the input vector representation of word $x^{(i)}$. Thus $u^{(i)}$ is directly mapped to $h$.

The score vectors are generated using Eq. (3):

\[ v = X^{(2)} h \qquad (3) \]

where $X^{(2)}$ is the output word matrix and $v^{(i)}$, the $i$-th row of $X^{(2)}$, is the output vector representation of word $x^{(i)}$.

The scores obtained above are converted to probability values by a softmax classifier, as in Eq. (4):

\[ y = \operatorname{softmax}(v) \qquad (4) \]

The entry with the maximum probability corresponds to the word nearest to the given center word (context), and that word is taken as the prediction. A separate output predictor matrix predicts the output for each relative position to the center word.
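To make Eqs. (1)-(4) concrete, the following Python/NumPy fragment is a minimal, illustrative sketch of the SSG prediction step, not the wang2vec implementation itself; the vocabulary size, embedding dimension and random matrices are invented for demonstration. The point shown here is that a separate output matrix is kept for each relative position to the center word.

```python
import numpy as np

rng = np.random.default_rng(0)
V, d, c = 10, 4, 2                       # vocabulary size, embedding dimension, window size

X1 = rng.normal(size=(d, V))             # input word matrix X(1); column i is u(i)
# Structured Skip-gram: one output word matrix X(2)_p per relative position p
X2 = {p: rng.normal(size=(V, d)) for p in range(-c, c + 1) if p != 0}

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def predict_context(i):
    """Predict one context word per relative position for center word i (Eqs. 2-4)."""
    h = X1[:, i]                         # Eq. (2): h = u(i)
    predictions = {}
    for p, X2_p in X2.items():
        v = X2_p @ h                     # Eq. (3): score vector for position p
        y = softmax(v)                   # Eq. (4): probabilities over the vocabulary
        predictions[p] = int(np.argmax(y))
    return predictions

print(predict_context(3))                # predicted word index for each relative position
```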

3. Entity Extraction for Malayalam

3.1. Methodology

The steps involved in the proposed system are illustrated in Fig. 2. Both the train data and the test data are subjected to BIO formatting. In the BIO-formatted text file, a token is marked B if it is the beginning of an entity of a given type, followed by I tags for tokens that continue that entity; all other tokens are marked O (a small illustrative sketch is given below). For POS tagging, the Amrita POS tagger is used. After the pre-processing steps are completed, the system continues with feature extraction. Features such as the entity tag, POS tag, prefixes, suffixes and hash tag are extracted.
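As a concrete illustration of the BIO scheme described above, the short sketch below assigns B-/I-/O tags to a token sequence given entity spans. The tokens and spans are invented, romanised examples, not drawn from the FIRE2015 data, and the helper function is our own, not part of the described pipeline.

```python
def bio_tags(tokens, spans):
    """Assign B-/I-/O tags given entity spans as (start, end, entity_type), end exclusive."""
    tags = ["O"] * len(tokens)
    for start, end, etype in spans:
        tags[start] = "B-" + etype                 # beginning of the entity
        for i in range(start + 1, end):
            tags[i] = "I-" + etype                 # continuation of the same entity
    return tags

# Hypothetical romanised tweet with a two-token person name and a location.
tokens = ["innale", "Sachin", "Tendulkar", "Kochiyil", "ethi"]
print(list(zip(tokens, bio_tags(tokens, [(1, 3, "PERSON"), (3, 4, "LOCATION")]))))
# [('innale', 'O'), ('Sachin', 'B-PERSON'), ('Tendulkar', 'I-PERSON'),
#  ('Kochiyil', 'B-LOCATION'), ('ethi', 'O')]
```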


Here, we use the Structured Skip-gram model to generate word embedding features from an unlabeled social media corpus in Malayalam. The model yields, for each input word, its nearest output vector representation.

The corpus is given to the Structured Skip-gram model, which produces a vector for each token. These unsupervised features are integrated with the stylometric features [13]. An SVM-based classifier, SVM-Light [14], is used to train a model on the combined feature set and to classify with it (a sketch of this step is given below). The accuracy obtained shows better performance than previous work on entity extraction for social media text in the Malayalam language.
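The sketch below illustrates this step under stated assumptions: per-token stylometric vectors and SSG embedding vectors are simply concatenated and passed to a linear SVM. We use scikit-learn's LinearSVC as a stand-in for SVM-Light, and the toy tokens, labels and vectors are invented for demonstration.

```python
import numpy as np
from sklearn.svm import LinearSVC

def build_matrix(tokens, stylometric, embeddings, dim=4):
    """Concatenate hand-crafted (stylometric) and SSG embedding features per token."""
    rows = []
    for tok in tokens:
        emb = embeddings.get(tok, np.zeros(dim))          # unseen tokens get a zero vector
        rows.append(np.concatenate([stylometric[tok], emb]))
    return np.vstack(rows)

# Toy, invented data standing in for the FIRE2015 tokens and the SSG vectors.
tokens = ["kochi", "nagaram", "@user", "2016"]
labels = ["B-LOCATION", "O", "O", "B-DATE"]
stylo = {t: np.array([len(t), float(t[0].isdigit()), float(t.startswith("@"))])
         for t in tokens}
rng = np.random.default_rng(1)
ssg = {t: rng.normal(size=4) for t in tokens}

clf = LinearSVC().fit(build_matrix(tokens, stylo, ssg), labels)   # stand-in for SVM-Light
print(clf.predict(build_matrix(["kochi"], stylo, ssg)))
```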

3.2. Feature Description

Feature extraction is an important step in the NER task. Stylometric features such as the entity tag, shape features, POS tags, prefixes and suffixes are used. The unsupervised features obtained from the Structured Skip-gram model are also used [15]; they are added to improve the accuracy of the system. The stylometric features added to the system

are as follows.

Entity Tag: The entity tag corresponding to that particular word is used as a feature. If the word is not an entity, it is marked as 0.

POS Tag: Malayalam POS tags are retrieved from the in-house Malayalam POS tagger.

Prefixes - Suffixes: The first and last 3, 4 and 5 characters of each word are used as features, since they help in identifying entities.

End features: The presence of the characters (. ? ! ,) is also used as a feature. For example, for the word "MALABAR?" the corresponding feature will be 0100.

Number, Punctuation and Hash tag: These features denote the presence of numbers, punctuation and hash tags in the word. For example, if the word contains any digit, the corresponding feature is marked as 1, else 0.

Length & position: The length of the word and the position of the word in the tweet are also considered as features.

Unsupervised Structured Skip-gram word embedding features: These features are the vector representations of words, acquired from the Structured Skip-gram model.

The final feature set includes both the stylometric and the unsupervised features; an illustrative extraction sketch follows.
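To make the feature list above concrete, the sketch below derives the prefix/suffix, end-punctuation, digit/punctuation/hash-tag and length/position features for a single token. The encodings follow the descriptions in this section (e.g. the four-character end feature), while the function name and dictionary layout are our own illustrative choices.

```python
import string

def stylometric_features(token, position):
    """Hand-crafted features for one token, following the descriptions in Sec. 3.2."""
    feats = {}
    for n in (3, 4, 5):                                   # prefixes and suffixes
        feats[f"prefix{n}"] = token[:n]
        feats[f"suffix{n}"] = token[-n:]
    # End features: presence of '.', '?', '!', ',' as a 4-bit string, e.g. "MALABAR?" -> "0100"
    feats["end"] = "".join("1" if ch in token else "0" for ch in ".?!,")
    feats["has_digit"] = any(ch.isdigit() for ch in token)
    feats["has_punct"] = any(ch in string.punctuation for ch in token)
    feats["has_hashtag"] = "#" in token
    feats["length"] = len(token)
    feats["position"] = position
    return feats

print(stylometric_features("MALABAR?", 2)["end"])         # -> '0100'
```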

4. Experimentation and Results

Cross-validation is done on the training data of the FIRE2015 task: 10% of the training data is held out as test data and given to the classifier. The efficiency of the system is evaluated in terms of Precision, Recall and F-measure, and compared with the existing results of the FIRE2015 entity extraction task [12].

4.1. Dataset Description

Malayalam social media text is used as the data set for the proposed system. A part of the data set is taken from the training data released by the FIRE2015 task; the remaining data is collected from Twitter [16]. The sizes of the raw corpus and the training corpus are tabulated in Table 1. The purpose of the system is to extract entities from the text based on the Structured Skip-gram model, and the system is modeled using an SVM-based classifier. Around 33 entity types are present in our corpus, including Person, Location, Organization, Materials, Date and so on. The corpus also includes URLs. The majority of the tweets are news content, with politics and film as the leading topics.

Table 1: Size of Unlabeled Corpus for the SSG model

Corpus                 Number of Utterances   Token Count   Average Tokens per Utterance
FIRE Training Corpus   8426                   76459         9.07
Tweet Corpus           21348                  179277        8.39
Total Corpus           29774                  255736        8.59

The size of the data set used in our system is given in Table 2. SVM-Light is used for training the model [14]. For evaluation, the classifier is provided with the gold standard data.


Table 2: Size of Dataset

Corpus         Number of Utterances   Token Count   Average Tokens per Utterance
Train Corpus   7658                   69684         9.09
Test Corpus    768                    6775          8.82
Total Corpus   8426                   76459         9.07

4.2. Results

The entity extraction system for the Malayalam social media corpus, with stylometric and unsupervised features, is trained and tested with an SVM-based classifier. The accuracy for Known, Known Ambiguous and Unknown tokens, together with the overall accuracy obtained during cross-validation, is tabulated in Table 3.

Table 3: Cross-Validation Results (in %)

Known Tokens   Known Ambiguous   Unknown Tokens   Accuracy
91.36          63.2              82.32            89.64

It is evident from the table that the known tokens have good accuracy compared to the known ambiguous tokens. An interesting inference from the table is that the unknown tokens have better accuracy than the known ambiguous ones (one plausible way to derive these token categories is sketched after Table 4). The accuracy for Known, Known Ambiguous, Known Unambiguous and Unknown tokens after testing the system is tabulated in Table 4. The test data contains 82.05% known tokens and 17.95% unknown tokens.

Table 4: Accuracy of Known vs Unknown Tokens

Tokens              Trials   Hits   Accuracy (%)
Known               6819     6243   91.553
Known Ambiguous     5606     5436   66.5293
Known Unambiguous   1213     807    96.9675
Unknown             1492     1223   81.9705
Total               8311     7466   89.93
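The known / known-ambiguous / unknown categories used in Tables 3 and 4 can be derived from the training data alone. The sketch below shows one plausible criterion, where a known token is any token seen in training and an ambiguous token is a known token that appeared with more than one entity tag; the exact criterion used here is not spelled out in the text, so this is an assumption.

```python
from collections import defaultdict

def categorise(train_pairs, test_tokens):
    """Split test tokens into known-unambiguous, known-ambiguous and unknown."""
    tags_seen = defaultdict(set)
    for token, tag in train_pairs:              # (token, entity tag) pairs from training
        tags_seen[token].add(tag)
    categories = {}
    for token in test_tokens:
        if token not in tags_seen:
            categories[token] = "unknown"
        elif len(tags_seen[token]) > 1:
            categories[token] = "known-ambiguous"
        else:
            categories[token] = "known-unambiguous"
    return categories

# Invented example: "kochi" was tagged inconsistently in training, "malabar" never seen.
train = [("kochi", "B-LOCATION"), ("kochi", "B-ORGANISATION"), ("nagaram", "O")]
print(categorise(train, ["kochi", "nagaram", "malabar"]))
# {'kochi': 'known-ambiguous', 'nagaram': 'known-unambiguous', 'malabar': 'unknown'}
```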

The performance of the system is evaluated in terms of precision, recall and F-measure, as shown in Table 5. The accuracy mentioned in this table is the overall accuracy of the system.

Table 5: Evaluation results of the system - Precision, Recall, F-measure

System                    Precision   Recall   F-measure
Baseline System [12]      62.58       20.82    31.24
Anand Kumar et al. [6]    51.18       40.29    45.08
Sanjay et al. [3]         60.05       39.94    47.97
Proposed System           93.33       72.41    81.55

For Table 5, since we do not have the gold standard test data of the FIRE2015 task, we considered 10% of the training corpus as the testing corpus and evaluated the system on it; the FIRE baseline accuracy, in contrast, is based on the test data provided by the organizers. Recall is defined as the ratio of the number of relevant tags in the document extracted by the proposed system to the total number of possible relevant tags present in the document, as given in Eq. (5). Recall plays a unique role in locating the relevant tags within the entire data set.

\[ \mathrm{Recall}\,(R) = \frac{\text{Number of relevant tags}}{\text{Total number of possible relevant tags}} \qquad (5) \]

Another metric, Precision, is defined as the ratio of the number of relevant tags to the number of retrieved tags in the document, as given by Eq. (6).


\[ \mathrm{Precision}\,(P) = \frac{\text{Number of relevant tags}}{\text{Number of retrieved tags}} \qquad (6) \]

The combined measure that evaluates both Precision and Recall is the F-measure. Mathematically, it is given by Eq. (7):

\[ F\text{-measure}\,(F) = \frac{2PR}{P + R} \qquad (7) \]
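As a quick arithmetic check on Table 5, the F-measure of the proposed system can be recomputed from Eq. (7) using its reported precision and recall; the snippet below is only a sanity check, not part of the evaluation pipeline.

```python
def f_measure(precision, recall):
    """Harmonic mean of precision and recall, Eq. (7)."""
    return 2 * precision * recall / (precision + recall)

print(round(f_measure(93.33, 72.41), 2))   # 81.55, matching the last row of Table 5
```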

Fig. 3: Entities and corresponding accuracy

The accuracy of individual entities in our system is graphically shown in Fig. 3, which gives the percentage of correctly extracted entities from the test data. From the graph, it can be seen that I-MONEY and B-MONEY have the highest accuracy, while entities like B-ORGANISATION and ENTERTAINMENT have lower accuracies. For numeric text, accuracy is higher for MONEY and B-DATE and lower for I-COUNT and COUNT. It can also be observed that the tag ORGANISATION has 40% accuracy, while the accuracies of B-ORGANISATION and I-ORGANISATION are very low. The accuracy for LOCATION is high compared to the tags I-LOCATION and B-LOCATION. As far as the PERSON tag is concerned, B-PERSON has better accuracy than I-PERSON.

5. Conclusion

In this paper, entity extraction using the Structured Skip-gram model has been carried out for the Malayalam language. The system is developed with the data set from the FIRE2015 entity extraction task. Different types of entities such as person, location and organization were extracted from social media text. The challenge in the proposed system was to classify ambiguous words and entities. SVM-Light, a well-known SVM-based classifier, is used to train our entity extraction model. The performance of the system is evaluated in terms of standard metrics such as precision, recall and F-measure, and the accuracy is measured using 10-fold cross-validation. The evaluation results clearly show that the proposed system handles unknown tokens better than known ambiguous tokens. The overall accuracy of the proposed system is 89.83%, which indicates that it performs better than existing entity extraction techniques for Malayalam. This work can be extended by using other embedding models and more unlabeled data.


References

1. W. Ling, C. Dyer, A. W. Black, and I. Trancoso, "Two/too simple adaptations of word2vec for syntax problems."

2. S. P. Sanjay, M. Anand Kumar, and K. P. Soman, "Amrita_CEN-NLP@FIRE 2015: CRF based named entity extraction for Twitter microposts," in CEUR Workshop Proceedings, vol. 1587, pp. 96–99.

3. S. Sanjay, A. V. Networking, and K. Soman, "Amrita CEN-NLP@FIRE 2015: CRF based named entity extraction for Twitter microposts."

4. R. Vijayakrishna and L. Sobha Devi, "Domain focused named entity recognizer for Tamil using conditional random fields," in IJCNLP, 2008, pp. 59–66.

5. K. Sarkar, "A hidden Markov model based system for entity extraction from social media English text at FIRE 2015," arXiv preprint arXiv:1512.03950, 2015.

6. S. Shriya and K. P. Soman, "AMRITA-CEN@FIRE 2015: Extracting entities for social media texts in Indian languages."

7. S. K. Saha, S. Chatterji, S. Dandapat, S. Sarkar, and P. Mitra, "A hybrid approach for named entity recognition in Indian languages," in Proceedings of the IJCNLP-08 Workshop on NER for South and South East Asian Languages, 2008, pp. 17–24.

8. P. Pasupat and P. Liang, "Zero-shot entity extraction from web pages," in ACL (1), 2014, pp. 391–401.

9. N. Abinaya, M. Anand Kumar, and K. P. Soman, "Randomized kernel approach for named entity recognition in Tamil," Indian Journal of Science and Technology, vol. 8, no. 24, p. 7, 2015.

10. O. Etzioni, M. Cafarella, D. Downey, A.-M. Popescu, T. Shaked, S. Soderland, D. S. Weld, and A. Yates, "Unsupervised named-entity extraction from the web: An experimental study," Artificial Intelligence, vol. 165, no. 1, pp. 91–134, 2005.

11. M. Anand Kumar, S. Shriya, and K. P. Soman, "AMRITA-CEN@FIRE 2015: Extracting entities for social media texts in Indian languages," in CEUR Workshop Proceedings, vol. 1587, pp. 85–86.

12. R. K. Pattabhi Rao, C. S. Malarkodi, and L. Sobha Devi, "ESM-IL: Entity extraction from social media text for Indian languages @ FIRE 2015 – an overview."

13. V. Abeera, S. Aparna, R. Rekha, M. Anand Kumar, V. Dhanalakshmi, K. P. Soman, and S. Rajendran, "Morphological analyzer for Malayalam using machine learning," LNCS, vol. 6411, pp. 252–254, 2012.

14. T. Joachims, "SVMlight: Support Vector Machine," http://svmlight.joachims.org/, University of Dortmund, 1999.

15. N. Abinaya, N. John, H. B. Barathi Ganesh, M. Anand Kumar, and K. P. Soman, "AMRITA-CEN@FIRE-2014: Named entity recognition for Indian languages using rich features," in ACM International Conference Proceeding Series, pp. 103–111, 2014.

16. H. B. Barathi Ganesh, N. Abinaya, M. Anand Kumar, R. Vinayakumar, and K. P. Soman, "AMRITA-CEN@NEEL: Identification and linking of Twitter entities," in CEUR Workshop Proceedings, vol. 1395, pp. 64–65, 2015.
