
Feature Selection for Fluency Ranking

Academic year: 2020

Figure 1: Partial derivation tree for the noun phrase de adviezen (the advices).
Table 1 shows the peak accuracies when selecting up to 5000 features.
Table 1: Peak accuracies for the maximum entropy, correlation-based, and frequency-based selection methods when selecting up to 5000 features.
Table 2: The first 10 features returned by maximum entropy feature selection, including the weights estimated by this feature selection method.
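The table captions above compare maximum entropy, correlation-based, and frequency-based feature selection. As a minimal illustration only (the toy feature matrix, labels, and feature names below are invented for this sketch, not taken from the paper), frequency-based and correlation-based rankings can be computed like this:

```python
import math

# Hypothetical toy data: each row is a binary feature vector for a
# realization, with a fluency label (1 = fluent, 0 = not fluent).
# Feature names are illustrative placeholders, not from the paper.
X = [
    [1, 0, 1],
    [1, 1, 0],
    [0, 1, 1],
    [1, 0, 0],
]
y = [1, 1, 0, 0]
names = ["f_ngram", "f_depth", "f_rule"]

def frequency_rank(X, names):
    # Frequency-based selection: rank features by how often they fire.
    counts = [sum(row[j] for row in X) for j in range(len(names))]
    return sorted(zip(names, counts), key=lambda p: -p[1])

def correlation_rank(X, y, names):
    # Correlation-based selection: rank features by the absolute
    # Pearson correlation between the feature column and the label.
    n = len(y)
    my = sum(y) / n
    out = []
    for j, name in enumerate(names):
        col = [row[j] for row in X]
        mx = sum(col) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(col, y))
        vx = sum((a - mx) ** 2 for a in col)
        vy = sum((b - my) ** 2 for b in y)
        r = cov / math.sqrt(vx * vy) if vx and vy else 0.0
        out.append((name, abs(r)))
    return sorted(out, key=lambda p: -p[1])

print(frequency_rank(X, names))
print(correlation_rank(X, y, names))
```

Maximum entropy selection, by contrast, ranks features by the weights a maxent (log-linear) model assigns them during training, as in Table 2; the two functions above only sketch the simpler filter-style baselines.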
