Feature importance

Empirical learning aided by weak domain knowledge in the form of feature importance

We have proposed a simple and efficient procedure for incorporating feature importance into neural network learning. The performance of such a learner shows that feature-importance-aided learners can achieve superior performance over ordinary empirical learners and can even compare to stronger knowledge-based learners without the extra cost of a deep domain theory. This approach of incorporating feature importance into learners is worth developing further. Possible future applications may be areas where expert knowledge is not readily available and training data is also scarce; IANN can be used in such domains with both limited training data and limited expert knowledge. Furthermore, modifications of existing popular empirical learners that utilize feature importance should also be developed. Currently, the feature importance knowledge provided by experts is assumed to be largely correct; if its accuracy is questionable, performance will actually degrade. A learning algorithm could therefore be developed that corrects FRI knowledge from training examples.
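
The excerpt does not describe how IANN injects the importance knowledge, so the following is only a minimal sketch of one plausible reading: fixed, expert-supplied weights that scale the inputs of an otherwise ordinary network. All names and weight values are illustrative assumptions (PyTorch).

```python
import torch
import torch.nn as nn

# Hypothetical expert-supplied importance weights, one per input feature.
expert_importance = torch.tensor([0.9, 0.1, 0.6, 0.4])

class ImportanceAidedNet(nn.Module):
    def __init__(self, importance, hidden=16):
        super().__init__()
        # Buffer: the weights guide learning but are not themselves trained.
        self.register_buffer("importance", importance)
        self.net = nn.Sequential(
            nn.Linear(len(importance), hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        # Emphasize the features the expert marked as important.
        return self.net(x * self.importance)

model = ImportanceAidedNet(expert_importance)
print(model(torch.randn(8, 4)).shape)  # torch.Size([8, 1])
```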


Many Faces of Feature Importance: Comparing Built in and Post hoc Feature Importance in Text Classification

Feature importance is commonly used to explain machine predictions. While feature importance can be derived from a machine learning model with a variety of methods, the consistency of feature importance via different methods remains understudied. In this work, we systematically compare feature importance from built-in mechanisms in a model such as attention values and post-hoc methods that approximate model behavior such as LIME. Using text classification as a testbed, we find that 1) no matter which method we use, important features from traditional models such as SVM and XGBoost are more similar with each other than with deep learning models; 2) post-hoc methods tend to generate more similar important features for two models than built-in methods. We further demonstrate how such similarity varies across instances. Notably, important features do not always resemble each other better when two models agree on the predicted label than when they disagree.
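
The paper compares attention values with LIME; as a hedged stand-in, the sketch below contrasts a built-in importance (coefficient magnitudes of a linear model) with a post-hoc one (permutation importance) and measures their consistency by top-k Jaccard overlap. The dataset and k are synthetic placeholders, not the paper's setup.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for a text-classification feature matrix.
X, y = make_classification(n_samples=500, n_features=20, n_informative=5,
                           random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X, y)

# "Built-in" importance: magnitude of the learned coefficients.
builtin = np.abs(clf.coef_).ravel()

# Post-hoc importance: permutation importance approximates model behavior,
# standing in here for LIME from the paper.
posthoc = permutation_importance(clf, X, y, n_repeats=10,
                                 random_state=0).importances_mean

# Consistency check: Jaccard overlap of the top-k features of each method.
k = 5
top_builtin = set(np.argsort(builtin)[-k:])
top_posthoc = set(np.argsort(posthoc)[-k:])
jaccard = len(top_builtin & top_posthoc) / len(top_builtin | top_posthoc)
print(f"top-{k} Jaccard similarity: {jaccard:.2f}")
```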


Classifier-Based Evaluation of Image Feature Importance

Significant advances in the performance of deep neural networks, such as Convolutional Neural Networks (CNNs) for image classification, have created a drive for understanding how they work. Different techniques have been proposed to determine which features (e.g., image pixels) are most important for a CNN's classification. However, the important features output by these techniques have typically been judged subjectively by a human, assessing whether the important features capture the features relevant to the classification rather than whether the features were actually important to the classifier itself. We address the need for an objective measure to assess the quality of different feature importance measures. In particular, we propose measuring the ratio of a CNN's accuracy on the whole image compared to an image containing only the important features. We also consider scaling this ratio by the relative size of the important region in order to measure conciseness. We demonstrate that our measures correlate well with prior subjective comparisons of important features but, importantly, do not require human studies. We also demonstrate that the features which multiple techniques agree are important have a higher impact on accuracy than those features that only one technique finds.
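
A hedged sketch of how the proposed accuracy-ratio measure might be computed, based only on the description above; the masking convention (zeroing unimportant pixels) and all parameter names are our assumptions, not the paper's code.

```python
import numpy as np

def importance_quality(model, images, labels, masks, region_fraction):
    # Accuracy on the full images vs. accuracy when only the "important"
    # pixels are kept (everything else zeroed -- our masking assumption).
    full_acc = np.mean(model.predict(images) == labels)
    masked_acc = np.mean(model.predict(images * masks) == labels)
    ratio = masked_acc / full_acc
    # Conciseness variant: reward small regions that preserve accuracy.
    conciseness = ratio / np.mean(region_fraction)
    return ratio, conciseness
```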


Interpretation of Neural Networks Is Fragile

Random sign perturbation. As a baseline, each pixel is randomly perturbed by ±ε. This is used as a baseline against which to compare our adversarial perturbations for both feature importance and sample importance methods. Iterative attacks against feature importance methods. In Algorithm 1, we define three adversarial attacks against feature importance methods, each of which consists of taking a series of steps in the direction that maximizes a differentiable dissimilarity function between the original and perturbed interpretation. (1) The top-k attack seeks to perturb the feature importance map by decreasing the relative importance of the k initially most important input features. (2) For image data, the feature importance map's center of mass often captures the user's attention. The mass-center attack is designed to result in the maximum spatial displacement of the center of mass. (3) If the goal is to have a semantically meaningful change in the feature importance map, the targeted attack aims to increase the concentration of feature importance scores in a pre-defined region of the input image.
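
Algorithm 1 itself is not reproduced in the excerpt; the sketch below shows one way the top-k attack could look with simple gradient saliency: iterated signed-gradient steps that shrink the saliency mass on the k initially most important features. The model, step size, and loop structure are illustrative assumptions, not the paper's exact algorithm.

```python
import torch
import torch.nn as nn

def saliency(model, x, create_graph=False):
    # Simple gradient saliency: |d(top class score) / d(input)|.
    x = x.requires_grad_(True)
    score = model(x).max(dim=1).values.sum()
    grad, = torch.autograd.grad(score, x, create_graph=create_graph)
    return grad.abs()

def topk_attack(model, x, k=10, alpha=1e-2, steps=20):
    # Find the k initially most important features, then take signed-gradient
    # steps that reduce the saliency mass remaining on them.
    base = saliency(model, x.clone()).detach()
    topk = base.flatten(1).topk(k, dim=1).indices
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        s = saliency(model, x_adv, create_graph=True)
        mass = s.flatten(1).gather(1, topk).sum()   # importance left on top-k
        grad, = torch.autograd.grad(mass, x_adv)
        x_adv = x_adv - alpha * grad.sign()         # step to shrink that mass
    return x_adv.detach()

# Toy model and input, just to show the call.
model = nn.Sequential(nn.Flatten(), nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 5))
x_adv = topk_attack(model, torch.randn(2, 1, 8, 8))
```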


Machine learning classifiers provide insight into the relationship between microbial communities and bacterial vaginosis

Background: Bacterial vaginosis (BV) is a disease associated with the vaginal microbiome. It is highly prevalent and is characterized by symptoms including odor, discharge and irritation. No single microbe has been found to cause BV. In this paper we use random forests and logistic regression classifiers to model the relationship between the microbial community and BV. We use subsets of the microbial community features in order to determine which features are important to the classification models. Results: We find that models generated using logistic regression and random forests perform nearly identically and identify largely similar important features. Only a few features are necessary to obtain high BV classification accuracy. Additionally, there appears to be substantial redundancy between the microbial community features. Conclusions: These results are in contrast to a previous study in which the important features identified by the classifiers were dissimilar. This difference appears to be the result of using different feature importance measures. It is not clear whether machine learning classifiers are capturing patterns different from simple correlations.
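
As a small illustration of the comparison the study performs (on synthetic data, not the microbiome dataset, and with generic importance measures rather than the study's exact ones), one can fit both classifiers and check how well their importance rankings agree:

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for microbial relative-abundance features.
X, y = make_classification(n_samples=300, n_features=15, n_informative=4,
                           random_state=1)

rf = RandomForestClassifier(n_estimators=200, random_state=1).fit(X, y)
lr = LogisticRegression(max_iter=1000).fit(X, y)

rf_imp = rf.feature_importances_      # impurity-based importance
lr_imp = np.abs(lr.coef_).ravel()     # coefficient magnitude
rho, _ = spearmanr(rf_imp, lr_imp)
print(f"Spearman rank agreement: {rho:.2f}")
```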


Casalicchio, Giuseppe (2019): On benchmark experiments and visualization methods for the evaluation and interpretation of machine learning models. Dissertation, LMU München: Fakultät für Mathematik, Informatik und Statistik

An interpretability method ideally provides explanations that are appropriate for both the underlying application and the intended interpretation goal of the practitioner. Thus, the first step for the practitioner is to formulate the interpretation goal that he or she is most interested in. From a model-agnostic perspective, this is closely related to asking: which quantity of interest that can be derived from a model deserves an explanation? Any quantity that can be derived from a model may be easier for humans to understand if it is possible to break it down into the individual contribution of each feature (or a set of features) to that quantity. The same objective has already been addressed by many interpretability methods, as these methods often decompose a certain quantity of interest into parts that are attributable to each feature (or set of features). In general, there are three frequently used quantities of interest in the literature that are often decomposed, namely 1) the model predictions (i.e., through feature effect methods), 2) the uncertainty or variability of the model predictions (i.e., through variance-based feature importance methods), or 3) the model performance (i.e., through performance-based feature importance methods). Specifically, many interpretability methods also quantify or visualize how changes (or permutations) of one feature (or a set of features) affect the aforementioned quantities of interest. A similar categorization of interpretability methods has been described in Jiang and Owen (2002), Wei et al. (2015), Zhao and Hastie (2017), and Guidotti et al. (2018). Figure 2.5 displays a general ontology. In its first dimension, it organizes common interpretability methods into the three quantities of interest mentioned above (i.e., feature effect methods, variance-based feature importance methods, and performance-based feature importance methods). The second dimension of the ontology refers to the scope
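
Of the three quantities of interest, the third (model performance) is perhaps the easiest to illustrate; the following is a minimal sketch of a performance-based importance, plain permutation importance, on synthetic data (the dissertation covers far more refined variants):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score

# Performance-based feature importance (category 3 above): how much does
# model performance drop when one feature's values are shuffled?
X, y = make_regression(n_samples=400, n_features=6, noise=5.0, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X, y)
baseline = r2_score(y, model.predict(X))

rng = np.random.default_rng(0)
for j in range(X.shape[1]):
    X_perm = X.copy()
    rng.shuffle(X_perm[:, j])             # permute one feature in place
    drop = baseline - r2_score(y, model.predict(X_perm))
    print(f"feature {j}: performance drop {drop:.3f}")
```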


Wavelet decompositions of Random Forests - smoothness analysis, sparse approximation and applications

In this paper we introduce, in the setting of machine learning, a generalization of wavelet analysis, which is a popular approach to low-dimensional structured signal analysis. The wavelet decomposition of a Random Forest provides a sparse approximation of any regression or classification high-dimensional function at various levels of detail, with a concrete ordering of the Random Forest nodes: from 'significant' elements to nodes capturing only 'insignificant' noise. Motivated by function space theory, we use the wavelet decomposition to compute numerically a 'weak-type' smoothness index that captures the complexity of the underlying function. As we show through extensive experimentation, this sparse representation facilitates a variety of applications such as improved regression for difficult datasets, a novel approach to feature importance, resilience to noisy or irrelevant features, compression of ensembles, etc.


Using Feature Weights to Improve Performance of Neural Networks

We have proposed a well-performing approach for incorporating feature importance into neural network learning. The performance of such a learner shows that feature-importance-aided learners can achieve superior performance over ordinary inductive learners. Removing irrelevant features by feature selection is a good approach; however, expert knowledge is available in some domains, and the correlations of the same features can also be calculated from a different problem dataset. This extra knowledge can be transferred to CANN to attain higher performance. This approach of incorporating feature importance into learners is worth developing further. Possible future applications of this algorithm are areas where related machine learning problems are being solved or where expert knowledge is available. Future research can address modifications of existing popular empirical learners so that they utilize feature importance. Correlation-coefficient-aided variants may be developed for algorithms such as Support Vector Machines, decision-tree-based algorithms, or Bayesian classifiers.
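
A hedged sketch of the transfer idea described above: compute absolute Pearson correlations between features and the target on a related dataset and reuse them as input weights on the target task. The datasets and the scaling scheme are assumptions; CANN's actual mechanism is not specified in the excerpt.

```python
import numpy as np

rng = np.random.default_rng(0)

# A related dataset from which the correlations are "transferred".
X_related = rng.normal(size=(200, 4))
y_related = 2 * X_related[:, 0] - X_related[:, 2] + rng.normal(size=200)

# Feature weights: absolute Pearson correlation with the related target.
weights = np.array([abs(np.corrcoef(X_related[:, j], y_related)[0, 1])
                    for j in range(X_related.shape[1])])
print("transferred weights:", np.round(weights, 2))

# On the target task, scale inputs by the transferred weights before
# handing them to any learner (shown here on a raw feature matrix).
X_target = rng.normal(size=(50, 4))
X_weighted = X_target * weights
```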


A Study of the Importance of External Knowledge in the Named Entity Recognition Task

Frequent mention tokens. Reflects the frequency of a given token in a list of entity names. We tokenized the list and computed frequencies. The feature assigns a weight to each token in the text corresponding to its normalized frequency. High weights should be assigned to tokens that indicate named entities. For instance, the top-5 tokens we found in English were "county", "john", "school", "station" and "district". All tokens without occurrences are assigned 0 weight.
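
A small sketch of the described feature with a toy gazetteer; the normalization (dividing by the maximum count) is our assumption, since the excerpt only says the frequencies are normalized.

```python
from collections import Counter

# Toy list of entity names standing in for the paper's gazetteer.
entity_names = ["Orange County", "John Smith", "Lincoln School",
                "Union Station", "Lake District", "John Adams School"]

counts = Counter(tok.lower() for name in entity_names for tok in name.split())
max_count = max(counts.values())

def mention_weight(token):
    # Normalized frequency; unseen tokens get 0 weight, as in the paper.
    return counts.get(token.lower(), 0) / max_count

text = "The new school near the station opened in the county".split()
print([(tok, round(mention_weight(tok), 2)) for tok in text])
```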


Attribute Evaluation of Dataset Using Chi-Square Test in Rapid Miner Studio

The relationships between the features define the strength of the model in the prediction scenario. The ratio between the features and their importance to the modelling guided the design of this work. We present a feature analysis using the chi-square test; predictions are performed using the P and SL values in the backward elimination process, and the implementation is explained in the proposed work section of this article. We conclude that chi-square analysis is the best approach to follow, and all the features provided to the model are further checked against their importance.
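
A hedged sketch of the two ingredients named above, chi-square attribute scores and P-versus-SL backward elimination, using scikit-learn and statsmodels on synthetic data (the article's Rapid Miner workflow and dataset are not reproduced here):

```python
import numpy as np
import statsmodels.api as sm
from sklearn.datasets import make_classification
from sklearn.feature_selection import chi2

# Illustrative data standing in for the article's dataset.
X, y = make_classification(n_samples=300, n_features=8, n_informative=3,
                           random_state=0)

# Chi-square attribute evaluation (chi2 needs non-negative features).
scores, _ = chi2(X - X.min(axis=0), y)
print("chi2 scores:", np.round(scores, 1))

# Backward elimination: drop the feature with the highest P value until
# every remaining P value is below the significance level SL.
SL = 0.05
cols = list(range(X.shape[1]))
while cols:
    fit = sm.Logit(y, sm.add_constant(X[:, cols])).fit(disp=0)
    p = np.asarray(fit.pvalues)[1:]      # skip the intercept
    worst = int(np.argmax(p))
    if p[worst] <= SL:                   # all remaining features significant
        break
    cols.pop(worst)
print("retained features:", cols)
```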


Determining Urban Emotions: A Case Study around Majitar, East District, Sikkim

implementation for the betterment of citizens and humanity at large. From the results of a supervised learning approach, the K-Nearest Neighbor classifier algorithm, it is observed that the classifier identifies 10 important feature attributes for each facility, which will affect urban planning, development and urbanization at Majitar. More focus should be given to the feature attributes the classifier considers most important when the number of neighbors is k=3. By analyzing the results, we find that the feature attributes with more importance will affect urban development, and those feature attributes are said to have the best response among the attributes of each facility. Development should target those feature attributes on the basis of their importance in urban planning and development. From this study, we can conclude that the people of Majitar have demanded improvement as well as development in each facility. They have suggested many features that need to be included at the various facilities available in this locality. Thus, urban emotions can act as new information layers within planning processes, and determining them using a supervised learning approach will help in understanding urbanization and in urban planning. In future, it will help in implementing the concept of a 'Smart City' at Majitar, which will lead to better sustainability.
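
The study's attribute-ranking procedure is not spelled out in the excerpt; as an assumption-laden sketch, one could pair a k=3 KNN classifier with permutation importance to pick the top 10 attributes:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.neighbors import KNeighborsClassifier

# Placeholder data; the study's facility survey attributes are not public.
X, y = make_classification(n_samples=200, n_features=12, n_informative=5,
                           random_state=0)
knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)   # k=3 as in the study

imp = permutation_importance(knn, X, y, n_repeats=10, random_state=0)
top10 = np.argsort(imp.importances_mean)[::-1][:10]
print("top-10 attributes:", top10)
```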


Review on Vision based Human Activity Analysis

adapted is that they have a trained set of data and compare the incoming activity to be recognized with the trained data. Based on the training set, we identify the activity from the input activity source. In order to train, we need a quantized set of data/feature representations that identifies the activity uniquely, so the most crucial point in identifying human activity lies in accurate feature extraction and representation. The more accurate the feature representation, the more accurate the activity recognition will be. Most of the above papers focus on extracting spatio-temporal features, since these provide descriptors based on space and time combined and yield accurate feature descriptors [7-8, 12]. The most commonly adopted methods for HAR classification are multi-level SVM, HMM, or ANN. Even though there are proposals with recognition rates above 95%, most of them have the disadvantage of misrecognizing one particular activity as another, or depend on recognition in a specific environment.


Content Importance Models for Scoring Writing From Sources

Our results are related to the findings of Gurevich and Deane (2007), who studied the difference between the reading and the lecture in their impact on essay scores for this test. Using data from a single prompt, they showed that the difference between the essay's average cosine similarity to the reading and its average cosine similarity to the lecture is predictive of the score for non-native speakers of English, thus using a model similar to LectVsRead, although they took all lecture, reading, and essay words into account, in contrast to our model that looks only at n-grams that appear in the lecture. Our study shows that the effectiveness of lecture-reading contrast models for essay scoring generalizes to a large set of prompts. Similarly, Evanini et al. (2013) found that overlap with material that is unique to the lecture (not shared with the reading) was predictive of scores in a spoken source-based question answering task. In the vast literature on summarization, our work is closest to Hong and Nenkova (2014), who studied models of word importance for multi-document summarization of news. The Prob, Position, and Good models are inspired by their findings of the effectiveness of similar models in their setting. We found that, in our setting, the Prob and Good models performed worse than assigning a uniform weight to all words. We note, however, that the models from Hong and Nenkova (2014) are not strictly comparable, since their word probability models were calculated after stopword exclusion, and their model that inspired our Good model was defined somewhat differently and validated using content words only. The definition of our Position model and its use in the essay scoring function S (equation 2) correspond to the average first location model of Hong and Nenkova (2014) for scoring summaries. Differently from their findings, this model is not effective for single words in our setting. Position models over n-grams with n > 1 are effective, but their prediction is in the opposite direction of that found for the news data: the more important materials tend to appear later in the lecture, as indicated by the positive r between average first position and essay score. These findings underscore the importance of paying attention to the genre of the source material when developing summarization systems.
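
A toy sketch of a lecture-reading contrast feature in the spirit of LectVsRead: the essay's cosine similarity to the lecture minus its similarity to the reading, over TF-IDF vectors. The texts are invented, and the paper's actual model is restricted to n-grams appearing in the lecture.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented toy texts; real essays and prompts come from the scored test.
lecture = "the lecturer argues that feature importance is genre dependent"
reading = "the reading claims importance models transfer across domains"
essay = "the essay says importance depends on the genre of the source"

vec = TfidfVectorizer().fit([lecture, reading, essay])
L, R, E = (vec.transform([t]) for t in (lecture, reading, essay))

# Contrast feature: similarity to the lecture minus similarity to the reading.
lect_vs_read = (cosine_similarity(E, L) - cosine_similarity(E, R)).item()
print(f"LectVsRead-style score: {lect_vs_read:.3f}")
```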


Feature and Importance

IPv6 is necessary for protocol design, simulation, improving network performance and building applications. In this paper we describe basic knowledge about IPv6. Then, we analyze the features and the importance of IPv6. NAT is also described in this paper. The IPv6 address space is so large that it is not possible to list every address for probing. This method lays a stable foundation for the succeeding probing program, improving efficiency and completeness and avoiding redundancy. Tunnelling is one of the best features of IPv6. We propose a method of finding tunnels based on the IPv6 path MTU (maximum transmission unit) discovery mechanism to improve the accuracy of the result.


The Role of Feature Selection with Applications to Eye Movements using Electrooculography

Feature selection is important in many pattern recognition problems for excluding irrelevant and redundant features. Feature selection improves recognition accuracy while reducing system complexity and processing time. Feature selection is a search problem: finding an optimal or suboptimal subset of m features out of the original M features. Many feature subset selection algorithms have been proposed. These algorithms can generally be classified as wrapper or filter algorithms according to the criterion function used in searching for good features. The simplest feature selection methods select the best individual features. A feature evaluation function
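
A minimal sketch of the simplest (filter) approach mentioned above, scoring each feature individually and keeping the top m; the scorer (an ANOVA F-test) and the data are illustrative choices, not the paper's.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif

# Score each feature on its own and keep the m best individual features.
X, y = make_classification(n_samples=250, n_features=20, n_informative=4,
                           random_state=0)
m = 4
selector = SelectKBest(score_func=f_classif, k=m).fit(X, y)
print("selected feature indices:", selector.get_support(indices=True))
```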


Which Are the Best Features for Automatic Verb Classification

AVC. Our conclusion is that both syntactic and lexical information are useful for verb classification. Although neither SCF nor CO performs well on its own, a combination of them proves to be the most informative feature for this task. Other ways of mixing syntactic and lexical information, such as DR and ACO, work relatively well too. What makes these mixed feature sets even more appealing is that they tend to scale well in comparison to SCF and CO. In addition, these feature sets are devised on a general level without relying on any knowledge about specific classes, and are thus potentially applicable to a wider range of class distinctions. Assuming that Levin's analysis is generally applicable across languages in terms of the linking of semantic arguments to their syntactic expressions, these mixed feature sets are potentially useful for building verb classifications for other languages.


Swarm based classifier model using ensemble feature ranking methods

Analyzing Figure-2, it is apparent that SSO-ACO achieved higher accuracy than the other presented classifiers using the three ensemble feature ranking approaches. These reported values indicate that SSO-ACO applied with the hybrid feature ranking method improves the classification accuracy on all datasets. The improvement was generally higher for the DoS+normal (96.49%) and U2R+normal (95.32%) datasets. The SSO-SVM classifier shows comparable performance, whereas SSO gives similar results for all datasets. Figure-3 reveals that SSO-ACO achieves high
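
The paper's specific rankers and aggregation rule are not given in the excerpt; the following sketch shows the general ensemble-feature-ranking idea, averaging the ranks a feature receives from three common scorers:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import chi2, mutual_info_classif

X, y = make_classification(n_samples=300, n_features=10, n_informative=4,
                           random_state=0)
X = X - X.min(axis=0)          # chi2 requires non-negative features

# Three individual rankers; the paper's exact rankers are not listed here.
scores = [
    chi2(X, y)[0],
    mutual_info_classif(X, y, random_state=0),
    RandomForestClassifier(random_state=0).fit(X, y).feature_importances_,
]

# Rank per scorer (0 = most important), then average ranks across scorers.
ranks = [np.argsort(np.argsort(-s)) for s in scores]
ensemble_rank = np.mean(ranks, axis=0)
print("features ordered by ensemble rank:", np.argsort(ensemble_rank))
```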


Delayed Umbilical Bleeding—A Presenting Feature for Factor XIII Deficiency: Clinical Features, Genetics, and Management

The importance of early diagnosis lies in the fact that serious bleeding manifestations can be completely prevented by prophylactic FXIII concentrate. There are few clotting disorders where prophylaxis is so important and so effective. In view of the high risk of intracranial hemorrhage, it is now recognized that all patients with FXIII deficiency should be offered prophylactic treatment from the time of diagnosis. It has been suggested that FXIII levels of ~3% to 10% of the normal population mean (0.03–0.1 IU/mL) are sufficient to prevent spontaneous hemorrhage [19]. With the exception of patient 1, there was


Hierarchical Attention Prototypical Networks for Few Shot Text Classification

Previous few-shot learning models for text classification roughly apply text representations or neglect the noisy information. We propose hierarchical attention prototypical networks consisting of feature-level, word-level and instance-level multi cross attention, which highlight the important information of few data and learn a more discriminative prototype representation. In the experiments, our model achieves state-of-the-art performance on the FewRel and CSID datasets. HAPN not only increases support set augmentability but also accelerates convergence speed in the training stage.
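
HAPN's hierarchical attention is not detailed in the excerpt, so the sketch below shows only the underlying prototypical-network step it builds on: mean-of-support prototypes and distance-based classification (shapes and data are placeholders).

```python
import torch
import torch.nn.functional as F

# Class prototypes are the mean of the support embeddings; a query is
# classified by its distance to each prototype. HAPN would replace the
# plain mean with its attention-weighted aggregation.
n_way, k_shot, dim = 3, 5, 16
support = torch.randn(n_way, k_shot, dim)   # embedded support examples
query = torch.randn(4, dim)                 # embedded query examples

prototypes = support.mean(dim=1)            # one prototype per class
logits = -torch.cdist(query, prototypes)    # closer prototype => higher score
print(F.softmax(logits, dim=1))
```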


Prediction of Bike Sharing Demand

By applying the Linear Combination model separately to each of the two partitions of the data and adding the results for each partition (because the two partitions are mutually exclusive and exhaustive, we can simply add up the predicted results to get the final result), we obtain a new model: Discriminating Linear Combination. This model can be used to effectively understand the importance of memberships in bike sharing systems.
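
A hedged sketch of the partition-and-add idea: fit one linear model per partition and sum the predictions. The casual/registered split is our assumption of what the two mutually exclusive, exhaustive partitions are.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 3))         # e.g. weather/time features
# Assumed partitions of total demand: casual vs. registered rides.
casual = X @ [0.5, 2.0, 1.0] + 2 + rng.normal(scale=0.5, size=300)
registered = X @ [3.0, 1.0, 0.0] + 5 + rng.normal(scale=0.5, size=300)

m_casual = LinearRegression().fit(X, casual)
m_registered = LinearRegression().fit(X, registered)

# Because the partitions cover all rides exactly once, total demand is
# simply the sum of the two partition predictions.
total_pred = m_casual.predict(X) + m_registered.predict(X)
print(total_pred[:5].round(1))
```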

