Unsupervised Keyphrase Extraction for Web Pages

As the resulting cosine similarity scores can be negative, the similarity scores for all candidates are normalized between 0 and 1 before the weighting. The bonus weight is determined by the elements a phrase occurs in. Since a phrase that appears in multiple key elements is likely to be more important than one that occurs in only one, we propose a cumulative weighting system: for each candidate phrase, the bonus weight is the sum of the weights of all elements the phrase occurs in. Earlier work has shown that the hostname can be a strong descriptor for a web page [22, 23], and it often consists of the most relevant words for the page, such as the brand name or the main product or service it provides. We therefore included a weight to increase the score for terms that occur in the hostname. We also included a small penalty for long phrases. Because the document embedding is essentially a mean of the word vectors in the document, a phrase embedding tends to move toward the mean of all word vectors as the number of words in the phrase increases. We noticed that, for this reason, EmbedRank had a tendency to generate many long phrases, which may also have been a cause of the lack of diversity described in the EmbedRank paper [5]. By adding a slight penalty to long phrases, we bias the ranking toward shorter phrases, while still allowing longer phrases to be selected if the added words contribute significantly to the relevance of the phrase. The proposed weights were determined by an exhaustive grid search on our manually annotated dataset and can be found in Table 1. To illustrate how the bonus weight is calculated, consider a phrase that consists of two words and appears in the title and the description, but not in the hostname or headings. In this case, the bonus weight is 1.5 + 0.5 − 0.25 × (2 − 1) = 1.75. The score S(p_i) is calculated as follows, where sim(p_i, d) is the cosine similarity between a candidate …
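
To make the cumulative weighting concrete, here is a minimal sketch of the bonus-weight computation. Only the title and description weights come from the worked example above; the hostname and heading weights, and the names ELEMENT_WEIGHTS and bonus_weight, are illustrative placeholders rather than the values from Table 1.

```python
# A minimal sketch of the cumulative bonus weight described above.

LENGTH_PENALTY = 0.25  # penalty per word beyond the first

ELEMENT_WEIGHTS = {
    "title": 1.5,        # from the worked example in the text
    "description": 0.5,  # from the worked example in the text
    "hostname": 1.0,     # placeholder: the actual value is in Table 1
    "headings": 0.5,     # placeholder: the actual value is in Table 1
}

def bonus_weight(phrase: str, elements_containing_phrase: set) -> float:
    """Sum the weights of every key element the phrase occurs in and
    subtract a small penalty for each additional word in the phrase."""
    bonus = sum(ELEMENT_WEIGHTS[e] for e in elements_containing_phrase)
    num_words = len(phrase.split())
    return bonus - LENGTH_PENALTY * (num_words - 1)

# Reproduces the example: a two-word phrase found in the title and description.
print(bonus_weight("web hosting", {"title", "description"}))  # 1.5 + 0.5 - 0.25 = 1.75
```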

Simple Unsupervised Keyphrase Extraction using Sentence Embeddings

Document keywords and keyphrases enable faster and more accurate search in large text collections, serve as condensed document summaries, and are used for various other applications, such as categorization of documents. In particular, keyphrase extraction is a crucial component when gleaning real-time insights from large amounts of Web and social media data, which is a now routine task in companies across the world. In this case, it is essential for the extraction to be fast and for the keyphrases to be disjoint. However, existing systems are complex and slow, and are plagued by over-generation, i.e. extracting redundant keyphrases (e.g., "European Commission" and "Commission").

Unsupervised Extraction of Popular Product Attributes from Web Sites

For example, in the digital camera domain, users mainly consider the feature "picture quality", which is related to several product attributes including "resolution", "ISO", etc. Some other features, such as "supported operating systems", may have only a tiny influence on users' decision making. Existing information extraction methods cannot automatically identify the product attributes that are of interest to users. Moreover, these kinds of attributes are usually unknown across different domains. As a result, there is a need for a method that can automatically identify the product attributes that users are interested in and extract these attributes from Web pages.

Mining the Web for Lexical Knowledge to Improve Keyphrase Extraction: Learning from Labeled and Unlabeled Data.

A journal article is often accompanied by a list of keyphrases, composed of about five to fifteen important words and phrases that capture the article's main topics. Keyphrases are useful for a variety of purposes, including summarizing, indexing, labeling, categorizing, clustering, highlighting, browsing, and searching. The task of automatic keyphrase extraction is to select keyphrases from within the text of a given document. Automatic keyphrase extraction makes it feasible to generate keyphrases for the huge number of documents that do not have manually assigned keyphrases. Good performance on this task has been obtained by approaching it as a supervised learning problem. An input document is treated as a set of candidate phrases that must be classified as either keyphrases or non-keyphrases. To classify a candidate phrase as a keyphrase, the most important features (attributes) appear to be the frequency and location of the candidate phrase in the document. Recent work has demonstrated that it is also useful to know the frequency of the candidate phrase as a manually assigned keyphrase for other documents in the same domain as the given document (e.g., the domain of computer science). Unfortunately, this keyphrase-frequency feature is domain-specific (the learning process must be repeated for each new domain) and training-intensive (good performance requires a relatively large number of training documents in the given domain, with manually assigned keyphrases). The aim of the work described here is to remove these limitations. In this paper, I introduce new features that are conceptually related to keyphrase-frequency and I present experiments that show that the new features result in improved keyphrase extraction, although they are neither domain-specific nor training-intensive. The new features are generated by issuing queries to a Web search engine, based on the candidate phrases in the input document. The feature values are calculated from the number of hits for the queries (the number of matching Web pages). In essence, these new features are derived by mining lexical knowledge from a very large collection of unlabeled data, consisting of approximately 350 million Web pages without manually assigned keyphrases.
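
As a rough illustration of hit-count-derived features, the sketch below computes a pointwise-mutual-information-style association score from Web hit counts. The hit_count function is a hypothetical stand-in for a search engine API, and the formula is a generic example of the idea rather than the paper's exact feature definition.

```python
import math

def hit_count(query: str) -> int:
    # Hypothetical stand-in: the paper's features are computed from the number
    # of matching pages (hits) returned by a Web search engine for a query.
    raise NotImplementedError("Replace with a call to a real search engine API")

def web_association_score(candidate: str, context_word: str,
                          total_pages: float = 350e6) -> float:
    """PMI-style score from hit counts: how much more often the candidate and a
    context word co-occur on the Web than chance would predict. This illustrates
    the general idea of hit-count features, not the paper's exact formula."""
    hits_both = hit_count(f'"{candidate}" "{context_word}"')
    hits_cand = hit_count(f'"{candidate}"')
    hits_ctx = hit_count(f'"{context_word}"')
    # Add 1 to each count to avoid log(0) / division by zero for rare phrases.
    return math.log(((hits_both + 1) * total_pages) /
                    ((hits_cand + 1) * (hits_ctx + 1)))
```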

Learning algorithms for keyphrase extraction

… & Brain Sciences Preprint Archive), one is about the hotel industry (Journal of the International Academy of Hospitality Research), and one is about chemistry (Journal of Computer-Aided Molecular Design). The full text of every article is available on the web. The authors supplied keyphrases for their articles. For the Email Messages corpus, we collected 311 email messages from six NRC employees. A university student created the keyphrases for the messages. For the Aliweb corpus, we collected 90 web pages using the Aliweb search engine, a public search engine provided by NEXOR. Aliweb has a web-based fill-in form, with a field for keyphrases, through which people may enter URLs to add to Aliweb. The keyphrases are stored in the Aliweb index, along with the URLs. For the NASA corpus, we collected 141 web pages from NASA's Langley Research Center. Each page includes a list of keyphrases. For the FIPS corpus, we gathered 35 web pages from the US government's Federal Information Processing Standards (FIPS). Each document includes a list of keyphrases.

Coherent Keyphrase Extraction via Web Mining

A journal article is often accompanied by a list of keyphrases, composed of about five to fifteen important words and phrases that express the primary topics and themes of the paper. For an individual document, keyphrases can serve as a highly condensed summary, they can supplement or replace the title as a label for the document, or they can be highlighted within the body of the text, to facilitate speed reading (skimming). For a collection of documents, keyphrases can be used for indexing, categorizing (classifying), clustering, browsing, or searching. Keyphrases are most familiar in the context of journal articles, but many other types of documents could benefit from the use of keyphrases, including web pages, email messages, news reports, magazine articles, and business papers.

Automated Data Extraction for Unsupervised Web Documents

Extracting data from the web is a core task in the field of data extraction. Internet pages in HTML, XML, etc. are considered an unstructured data source due to the wide variety in code and styles, and of course exceptions and violations of standard coding practices. Due to this variety, extracting data from the web is a highly customizable process that depends on the specific source of information one is trying to retrieve. Web data extraction involves processing web pages written in a text-based markup language such as HTML or XHTML, which usually contain useful information. Data extraction thus takes an unstructured form of data and parses that information into a structured data set. Data mining can be defined as a process of extracting patterns and data from the internet; this process is important because data mining is becoming an increasingly popular tool for model building and decision making. Many web data extractors rely on extraction rules, which can be classified into ad-hoc or built-in rules. The costs involved in hand-crafting ad-hoc rules motivated many researchers to work on proposals to learn them automatically, either using supervised techniques, i.e., techniques that require the user to provide samples of the data to be extracted (annotations), or using unsupervised techniques, i.e., techniques that learn rules that extract as much prospective data as they can, after which the user gathers the relevant data from the results. Web data extractors that rely on built-in rules are based on a collection of heuristic rules that have proven to work well on many typical web documents.

Conundrums in Unsupervised Keyphrase Extraction: Making Sense of the State of the Art

The keyphrases for a given document refer to a group of phrases that represent the document. Although we often come across texts from different domains, such as scientific papers, news articles and blogs, which are labeled with keyphrases by the authors, a large portion of the Web content remains untagged. While keyphrases are excellent means for providing a concise summary of a document, recent research results have suggested that the task of automatically identifying keyphrases from a document is by no means trivial. Researchers have explored both supervised and unsupervised techniques to address the problem of automatic keyphrase extraction. Supervised methods typically recast this problem as a binary classification task, where a model is trained on annotated data to determine whether a given phrase is a keyphrase or not (e.g., Frank et al. (1999), Turney (2000; 2003), Hulth (2003), Medelyan et al. (2009)). A disadvantage of supervised approaches …

Keyword and Keyphrase Extraction Techniques: A Literature Review

Nowadays we need to go quickly through large amounts of textual information to find documents related to our interests, and this document space is growing daily at an overwhelming rate. It is now common to store several million web pages and hundreds of thousands of text files. Analyzing such huge quantities of data can be made easier if we have a subset of words (keywords) that captures the main features, concepts, and themes of a document. Appropriate keywords can serve as a highly concise summary of a document and help us easily organize documents and retrieve them based on their content. Keywords are used in academic articles to give the reader an idea of the article's content. In a textbook, they are useful for readers to identify and retain the main points of a particular section. As keywords represent the main theme of a text, they can be used as a measure of similarity for text clustering.

Simple Unsupervised Keyphrase Extraction using Sentence Embeddings

Keyphrase extraction is the task of automatically selecting a small set of phrases that best describe a given free text document. Supervised keyphrase extraction requires large amounts of labeled training data and generalizes very poorly outside the domain of the training data. At the same time, unsupervised systems have poor accuracy, and often do not generalize well, as they require the input document to belong to a larger corpus also given as input. Addressing these drawbacks, in this paper, we tackle keyphrase extraction from single documents with EmbedRank: a novel unsupervised method that leverages sentence embeddings. EmbedRank achieves higher F-scores than graph-based state-of-the-art systems on standard datasets and is suitable for real-time processing of large amounts of Web data. With EmbedRank, we also explicitly increase coverage and diversity among the selected keyphrases by introducing an embedding-based maximal marginal relevance (MMR) for new phrases. A user study including over 200 votes showed that, although reducing the phrases' semantic overlap leads to no gains in F-score, our high diversity selection is preferred by humans.
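
A minimal sketch of embedding-based MMR selection as described in the abstract, assuming candidate and document embeddings have already been computed with some sentence-embedding model. The function name and the trade-off parameter lam are illustrative, and this is the standard MMR formulation with cosine similarities; the paper's exact normalization may differ.

```python
import numpy as np

def mmr_select(doc_emb, cand_embs, candidates, k=10, lam=0.5):
    """Embedding-based maximal marginal relevance: repeatedly pick the candidate
    most similar to the document while least similar to the phrases already
    selected. `lam` trades off relevance against diversity."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

    relevance = np.array([cos(c, doc_emb) for c in cand_embs])
    selected, remaining = [], list(range(len(candidates)))
    while remaining and len(selected) < k:
        if not selected:
            # First pick: the candidate closest to the document embedding.
            best = max(remaining, key=lambda i: relevance[i])
        else:
            def mmr_score(i):
                redundancy = max(cos(cand_embs[i], cand_embs[j]) for j in selected)
                return lam * relevance[i] - (1 - lam) * redundancy
            best = max(remaining, key=mmr_score)
        selected.append(best)
        remaining.remove(best)
    return [candidates[i] for i in selected]
```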

PositionRank: An Unsupervised Approach to Keyphrase Extraction from Scholarly Documents

For example, in a scholarly domain, keyphrases generally occur on positions very close to the beginning of a document and occur frequently. Figure 1 shows an anecdotal example illustrating this behavior using the 2010 best paper award winner in the World Wide Web conference. The author input keyphrases are marked in red bold in the figure. Notice in this example the high frequency of the keyphrase "Markov chain" that occurs very early in the document (even from its title). Hence, can we design an effective unsupervised approach to keyphrase extraction by jointly exploiting words' position information and their frequency in documents? We specifically address this question using research papers as a case study. The result of this extraction task will aid indexing of documents in digital libraries, and hence, will lead to improved organization, search, retrieval, and recommendation of scientific documents. The importance of keyphrase extraction from research papers is also emphasized by the SemEval Shared Tasks on this topic from 2017 and 2010 (Kim …
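
The sketch below illustrates one simple way to combine position and frequency information: bias the restart distribution of a PageRank-style walk over a word co-occurrence graph with inverse-position weights. It assumes the input words have already been filtered (e.g., to nouns and adjectives), and the exact weighting used in PositionRank may differ from this simplified version.

```python
import networkx as nx
from collections import defaultdict

def position_weighted_keywords(words, window=2, top_n=10):
    """Rank words by a PageRank biased toward frequent, early-occurring words."""
    # Build an undirected co-occurrence graph over a small sliding window.
    graph = nx.Graph()
    for i, w in enumerate(words):
        for j in range(i + 1, min(i + 1 + window, len(words))):
            graph.add_edge(w, words[j])

    # Each occurrence at position p (1-based) contributes 1/p to the word's
    # bias, so frequent, early words get the largest restart probability.
    bias = defaultdict(float)
    for p, w in enumerate(words, start=1):
        bias[w] += 1.0 / p
    total = sum(bias.values())
    personalization = {w: s / total for w, s in bias.items()}

    scores = nx.pagerank(graph, personalization=personalization)
    return sorted(scores, key=scores.get, reverse=True)[:top_n]
```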

TeKET: a Tree-Based Unsupervised Keyphrase Extraction Technique

Here, the core idea is to build a graph from an input document and to rank its nodes according to their importance [8]. For instance, KeyGraph [45] is a similar technique which is content-sensitive and domain-independent and utilizes co-occurrence of various terms for indexing vertices of the graph. However, it fails to detect the relationships among the low-frequency items inside clusters and also ignores direct relationships between the clusters [71]. On the other hand, PageRank [46] is based on the concept of random walks and is related to eigenvector centrality, which tends to favor nodes with many important connections regardless of cohesiveness considerations. This technique is well suited for ranking pages on the web and in social networks, but not for keyphrase extraction, due to its lack of consideration of cohesiveness [44, 72].

URES: an Unsupervised Web Relation Extraction System

The KnowItAll system is a direct predecessor of URES. It was developed at the University of Washington by Oren Etzioni and colleagues (Etzioni, Cafarella et al. 2005). KnowItAll is an autonomous, domain-independent system that extracts facts from the Web. The primary focus of the system is on extracting entities (unary predicates). The input to KnowItAll is a set of entity classes to be extracted, such as "city", "scientist", "movie", etc., and the output is a list of entities extracted from the Web. KnowItAll uses a set of manually built generic rules, which are instantiated with the target predicate names, producing queries, patterns and discriminator phrases. The queries are passed to a search engine, the suggested pages are downloaded and processed with the patterns. Every time a pattern is matched, an extraction is generated and evaluated using Web statistics: the number of search engine hits for the extraction alone and for the extraction together with the discriminator phrases. KnowItAll also has a pattern learning module (PL) that is able to learn patterns for extracting entities. However, it is unsuitable for learning patterns for relations. Hence, for extracting relations KnowItAll currently uses only the generic hand-written patterns.
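
The following sketch illustrates the general idea of instantiating generic rules with a target class name and applying the resulting patterns to downloaded text. The two patterns, the crude proper-noun matcher, and the function names are simplified illustrations, not KnowItAll's actual rules, queries, or discriminators.

```python
import re

# Generic rule templates instantiated with a target class name (plural form).
GENERIC_PATTERNS = [
    "{cls} such as {slot}",
    "{slot} and other {cls}",
]

def instantiate(cls_plural):
    # Crude matcher for a capitalized entity name; a real system is far richer.
    slot = r"([A-Z][A-Za-z]+(?: [A-Z][A-Za-z]+)*)"
    return [re.compile(p.format(cls=cls_plural, slot=slot)) for p in GENERIC_PATTERNS]

def extract(text, cls_plural):
    """Apply every instantiated pattern to the text and collect the matches."""
    found = set()
    for pattern in instantiate(cls_plural):
        for match in pattern.finditer(text):
            found.add(match.group(1))
    return found

print(extract("We visited cities such as Paris and Berlin.", "cities"))  # {'Paris'}
```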

Bootstrapping Information Extraction from Semi-structured Web Pages

Field detection is done in an unsupervised manner as follows. First, document object model (DOM) trees for the detail pages on a site are aligned. Next, the strings occurring at aligned nodes are enumerated and inserted into a set. Any of these sets containing more than one element then corresponds to a region in the "template" for detail pages that exhibits variability across pages. Such regions (aligned DOM tree nodes exhibiting variability) are taken as the candidate data fields for the site. This process is illustrated in Fig. 3. Data fields that aligned to fewer than half of the pages on a site were filtered out, because these were typically not interesting data fields; they tended to be different paragraphs of free text which the tree alignment algorithm occasionally decided to align. In effect, this filtering step ignores free-text data fields which do not occur on all pages of the site and are not interesting data fields.
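
A simplified sketch of this variability-based field detection is shown below: it groups text by tag path (a crude stand-in for full DOM tree alignment), then keeps paths whose strings vary across pages and that occur on at least half of the pages. The class and function names and the example pages are illustrative assumptions.

```python
from collections import defaultdict
from html.parser import HTMLParser

VOID_TAGS = {"br", "img", "hr", "meta", "link", "input"}

class PathTextCollector(HTMLParser):
    """Collect visible text under each tag path, e.g. ('html', 'body', 'h1')."""
    def __init__(self):
        super().__init__()
        self.stack, self.texts = [], defaultdict(list)

    def handle_starttag(self, tag, attrs):
        if tag not in VOID_TAGS:  # void tags never get a closing tag
            self.stack.append(tag)

    def handle_endtag(self, tag):
        if self.stack and self.stack[-1] == tag:
            self.stack.pop()

    def handle_data(self, data):
        if data.strip():
            self.texts[tuple(self.stack)].append(data.strip())

def candidate_fields(pages_html):
    """Keep paths whose strings vary across pages and that occur on >= half of them."""
    per_path = defaultdict(dict)  # path -> {page index -> set of strings}
    for idx, html in enumerate(pages_html):
        parser = PathTextCollector()
        parser.feed(html)
        for path, strings in parser.texts.items():
            per_path[path].setdefault(idx, set()).update(strings)

    fields = {}
    for path, by_page in per_path.items():
        all_strings = set().union(*by_page.values())
        if len(all_strings) > 1 and len(by_page) >= len(pages_html) / 2:
            fields[path] = all_strings
    return fields

pages = [
    "<html><body><h1>Alpha Hotel</h1><p>Free parking</p></body></html>",
    "<html><body><h1>Beta Hotel</h1><p>Free parking</p></body></html>",
]
print(candidate_fields(pages))  # {('html', 'body', 'h1'): {'Alpha Hotel', 'Beta Hotel'}}
```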

Advances in Automatic Keyphrase Extraction

For this reason, we asked native speakers of these three languages to collect 20 documents per language and assign 15 keyphrases to each document, ranking them by importance. To further verify our techniques, we repeated the same process for 20 documents in English as well. The collected datasets are described in Table 2.5, in which we show the mean number of words and the standard deviation of the number of words of the documents in each dataset. For English, Italian and Portuguese, we collected similar datasets with a majority of scientific documents, which is reflected by a mean of about 4000 words per document. For Romanian, we collected mainly newswire documents, so we have a mean of 800 words per document, close to the AKEC dataset. All the purpose-built datasets have greater variability in the number of words than the SEMEVAL 2010 and AKEC datasets because, while those datasets are composed of only one kind of document with strict constraints on length, our test datasets are mixed, containing scientific papers, newswire text, web pages, etc.

Open Domain Web Keyphrase Extraction Beyond Language Modeling

This paper focuses on the task of open domain web keyphrase extraction, which targets KPE for web documents without any restriction on the domain, quality, or content of the documents. We curate and release a large scale open domain KPE dataset, OpenKP, which includes about one hundred thousand web documents with expert keyphrase annotations. The web documents are randomly sampled from the English fraction of a large web corpus and reflect the characteristics of typical web pages, with large variation in their domains and content qualities. To the best of our knowledge, this will be the first publicly available open domain manually annotated keyphrase extraction dataset at this scale.

Approach Research of Keyword Extraction Based on Web Pages Document

To carry out the experiment design, a data set of web pages is needed. The DMOZ data set (Open Directory) is used for this purpose. DMOZ, or the Open Directory Project, is the largest, most comprehensive human-edited directory of the Web. It is constructed and maintained by a global community of volunteer editors. The Open Directory was founded in the spirit of the Open Source movement and is the only major directory that is 100% free. The data set used consists of 50 web pages from the shopping domain, extracted from the DMOZ dataset.

Automatic Data Extraction From Template Generated Web Pages

Another issue is handling images with descriptive functions in web pages. It is not good to simply remove every image from the page, as we all know that a picture is sometimes worth a thousand words. For example, a fashion site like elle.com prefers image text rather than plain text. Image text certainly facilitates readers' understanding; on the other hand, it hinders data extraction. Recently, Google has enabled searching for images by the image name. From our observation, the "ALT" parameter of an image link sometimes also describes the image content, if edited by a human editor. We include both the image name and the "ALT" value in our data extraction process, while discarding worthless values such as "img9", "spacer" or "graphic". We are working on evaluating the effect of including informative image names and image-name parsing rules.
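
A small sketch of the idea, using Python's standard HTML parser: collect each image's file name and ALT text and drop obviously meaningless values such as "img9" or "spacer". The filter pattern and the extraction details here are assumptions, not the authors' exact parsing rules.

```python
import re
from html.parser import HTMLParser

# Values with no descriptive content, as in the examples given in the excerpt.
WORTHLESS = re.compile(r"^(img\d*|spacer|graphic)$", re.IGNORECASE)

class ImageTextExtractor(HTMLParser):
    """Collect descriptive text for images: the file name (without extension)
    and the ALT attribute, skipping obviously meaningless values."""
    def __init__(self):
        super().__init__()
        self.image_texts = []

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        attrs = dict(attrs)
        # File name without directory and extension, e.g. "red-evening-dress".
        name = (attrs.get("src") or "").rsplit("/", 1)[-1].rsplit(".", 1)[0]
        for value in (name, attrs.get("alt") or ""):
            value = value.strip()
            if value and not WORTHLESS.match(value):
                self.image_texts.append(value)

parser = ImageTextExtractor()
parser.feed('<img src="red-evening-dress.jpg" alt="Red evening dress"><img src="spacer.gif">')
print(parser.image_texts)  # ['red-evening-dress', 'Red evening dress']
```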

Topical Keyphrase Extraction from Twitter

So far there is little work on keyword or keyphrase extraction from Twitter. Wu et al. (2010) proposed to automatically generate personalized tags for Twitter users. However, user-level tags may not be suitable to summarize the overall Twitter content within a certain period and/or from a certain group of people, such as people in the same region. Existing work on keyphrase extraction identifies keyphrases from either individual documents or an entire text collection (Turney, 2000; Tomokiyo and Hurst, 2003). These approaches are not immediately applicable to Twitter because it does not make sense to extract keyphrases from a single tweet, and if we extract keyphrases from a whole tweet collection we will mix a diverse range of topics together, which makes it difficult for users to follow the extracted keyphrases.