test data sets

Top PDF test data sets:

Chapter 3_handout.pdf

– 160 data points for fitting models (reeddata-fit); 80 (independent) data points for testing models (reeddata-test). – reeddata-fit and reeddata-test data sets available from I[r]

Expressive Reusable Workflow Templates

However, the software components within a workflow are in many cases not designed to process more than one data set at a time. Consider, for example, a bioinformatician testing whether a particular gene expression predicts a given phenotype in an organism using a k-nearest neighbor classifier. This classifier typically only classifies one data set at a time, but imagine that the bioinformatician wants to test a newly trained classifier on many test data sets in order to have evidence of the classifier's efficacy. Or, to find the classifier that produces the best results, the bioinformatician may want to train and test a collection of alternative algorithms simultaneously. To support this sort of application using available software components, a workflow system needs to be able to represent, reason about, and process not only collections of data but also collections of components.
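
As an illustration of the kind of collection handling described above, the sketch below applies a single trained k-nearest-neighbour classifier to a list of test sets and also sweeps a small collection of alternative classifiers over the same data. The data sets, classifier choices and scoring are placeholder assumptions, not components of any particular workflow system.

```python
# Sketch: evaluating one classifier on many test sets, and many classifiers on one task.
# Assumes scikit-learn is available; the data sets here are synthetic placeholders.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)

# A "collection of data": split the held-out data into several independent test sets.
test_sets = [(X_rest[i::3], y_rest[i::3]) for i in range(3)]

knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
for i, (X_test, y_test) in enumerate(test_sets):
    print(f"kNN accuracy on test set {i}: {knn.score(X_test, y_test):.3f}")

# A "collection of components": train and test alternative algorithms side by side.
for name, clf in [("kNN", KNeighborsClassifier(5)), ("tree", DecisionTreeClassifier(random_state=0))]:
    clf.fit(X_train, y_train)
    scores = [clf.score(Xt, yt) for Xt, yt in test_sets]
    print(name, [round(s, 3) for s in scores])
```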

GPU Based Parallel TLBO and Parallel Jaya for Multiple Sequence Alignment Using MapReduce (GPU PTLBO & GPU PJAYA)

Two parallel evolutionary algorithms based on two recent algorithm-specific parameter-free algorithms, PTLBO and PJAYA, are proposed. Common control parameters such as the number of generations are selected by running the algorithm multiple times and noting the generation from which the fittest value no longer changes. The proposed PTLBO not only provides results with good accuracy but also yields a reduction in computational time. In a nutshell, the similarity score is calculated for 22 benchmark data sets from OXBench; out of the 22 test data sets, PTLBO performed best on 17, PJAYA was best on 3, and PDE on only one, while PTLBO and PJAYA performed equally well on one data set. The average similarity score of PTLBO is significantly better than that of the other techniques.
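
The stopping rule sketched below illustrates the kind of convergence check described above: run for at most a capped number of generations and stop once the best fitness has not changed for a chosen number of consecutive generations. The fitness function, perturbation step and patience value are illustrative assumptions, not taken from the paper.

```python
# Sketch: choosing the number of generations by detecting when the fittest value stops changing.
# The variation step below is a placeholder for TLBO/Jaya-style population updates.
import random

def evolve(fitness, population, max_generations=500, patience=25):
    best = max(fitness(ind) for ind in population)
    stalled = 0
    for gen in range(max_generations):
        # Placeholder variation step: perturb each individual slightly.
        population = [[x + random.gauss(0, 0.1) for x in ind] for ind in population]
        current_best = max(fitness(ind) for ind in population)
        if current_best > best + 1e-9:
            best, stalled = current_best, 0
        else:
            stalled += 1
        if stalled >= patience:  # fittest value unchanged for `patience` generations
            return best, gen + 1
    return best, max_generations

pop = [[random.uniform(-5, 5) for _ in range(10)] for _ in range(30)]
best, gens = evolve(lambda ind: -sum(x * x for x in ind), pop)
print(f"best fitness {best:.4f} after {gens} generations")
```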

Essential Data, Budget Sets and Rationalization

Given an experiment (x, B), the economist can deduce the corresponding essential experiment by setting α = A_{x,B}. Alternatively, let us imagine that the essential experiment is the only available one (the full sample may be too complex to be fully memorized, or the consumer privately knows his budget sets and the economist just obtains essential budgetary information from the consumer, in a “thought experiment”). Under this interpretation, the essential experiment (x, α) does not necessarily admit a budget representation. In the next section, we introduce a tractable necessary and sufficient condition (“no contradictory statement”) for this property to hold (see also Corollary 1 at the end of Section 4). For the time being, we just assume that (x, α) admits a budget representation, as is the case if the essential experiment is simply deduced from some experiment (x, B).

Diversity of Virophages in Metagenomic Data Sets

protein is generally considered to be a conserved protein among viruses, and it is widely used to reconstruct phylogenetic trees. It was also conserved in virophages, based on BLAST sequence similarity searches and sequence alignment. In our study, four complete virophage genomes and one nearly complete virophage genome were obtained from two metagenomic databases, named Yellowstone Lake: Genetic and Gene Diversity in a Freshwater Lake and Antarctica Aquatic Microbial Metagenome, which were downloaded from the CAMERA 2.0 Portal. These virophages were tentatively named YSLV1, YSLV2, YSLV3, YSLV4, and ALM. Detailed results of the metagenome assembly, i.e., genome coverage, the number of reads recruited to each genome, and the size of the data sets from which the metagenomes originated, are shown in Table 3; see also Figures S1 and S2 in the supplemental material.

Lossless Compression of SKA Data Sets

With the size of astronomical data archives continuing to increase at an enormous rate, the providers and end users of astronomical data sets will benefit from effective data compression techniques. This paper explores different lossless data compression techniques and aims to find an optimal compression algorithm to compress astronomical data obtained by the Square Kilometre Array (SKA), which are new and unique in the field of radio astronomy. It was required that the compression be lossless and that the data sets be compressed while the data are being read. The project was carried out in conjunction with the SKA South Africa office. Data compression reduces the time taken and the bandwidth used when transferring files, and it can also reduce the costs involved with data storage. The SKA uses the Hierarchical Data Format (HDF5) to store the data collected from the radio telescopes, with the data used in this study ranging from 29 MB to 9 GB in size. The compression techniques investigated in this study include SZIP, GZIP, the LZF filter, LZ4 and the Fully Adaptive Prediction Error Coder (FAPEC). The algorithms and methods used to perform the compression tests are discussed and the results from the three phases of testing are presented, followed by a brief discussion of those results.
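
For context, the sketch below shows how lossless filters of the kind compared in the paper (GZIP and LZF) can be applied when writing an HDF5 dataset. The h5py library is an illustrative choice, and the array contents, file name and compression level are placeholders rather than SKA data or settings.

```python
# Sketch: writing an HDF5 dataset with lossless compression filters (GZIP and LZF).
# h5py exposes these filters through create_dataset; the data here is a synthetic placeholder.
import numpy as np
import h5py

data = np.random.default_rng(0).normal(size=(1024, 1024)).astype(np.float32)

with h5py.File("visibilities.h5", "w") as f:
    # GZIP: higher compression ratio, slower; level 1-9.
    f.create_dataset("gzip_data", data=data, compression="gzip", compression_opts=4)
    # LZF: lower ratio but very fast, suited to compress-while-reading pipelines.
    f.create_dataset("lzf_data", data=data, compression="lzf")

with h5py.File("visibilities.h5", "r") as f:
    for name, dset in f.items():
        ratio = dset.id.get_storage_size() / data.nbytes
        print(f"{name}: stored at {ratio:.2%} of the raw size")
```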

Dimensionality reduction of clustered data sets

Fig. 2 shows the results of our model compared with PCA and with supervised LDA. The first thing to notice is that PCA provides a very poor visualization of the data; one class is reasonably separated, but the other six are completely overlapped. The visualization provided by our model clearly shows the presence of more clusters; in particular, besides the circle class, the diamond class and the crosses class are quite well separated. The visualization obtained with our model is similar to the one obtained with supervised LDA, apart from a permutation of classes. In order to give a more quantitative appreciation of the quality of the visualization, we again applied a 1NN classifier to the three projections. This confirmed that the projection given by PCA indeed does not respect the clustered structure; only 49.5 percent of the projected points are nearest to a point of the same class. The situation is dramatically better in our model, with the percentage of points closest to points in the same class rising to 74.3 percent. This comes very close indeed to the 75.7 percent obtained by LDA (which obviously employed the class information). It may be worthwhile reporting that the same test ...

Fig. 1. Experimental results. (a) Toy data: the solid dotted line represents the first principal direction of the data; the solid line is the initialization (obtained using k-means followed by Fisher's discriminant); the dashed and dotted line gives the maximum likelihood estimate of the model, which coincides with (supervised) Fisher's discriminant. (b) Iris data set: the three classes are shown as triangles, crosses, and circles.
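
The quality measure used above, the fraction of projected points whose nearest neighbour belongs to the same class, can be computed as in the sketch below. The choice of PCA as the projection and the use of the Iris data are stand-ins for illustration, not the model evaluated in the excerpt.

```python
# Sketch: scoring a 2-D projection by the fraction of points whose nearest
# neighbour (excluding the point itself) shares its class label.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

def same_class_nn_rate(points, labels):
    # Pairwise squared distances; mask the diagonal so a point is not its own neighbour.
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)
    nearest = d2.argmin(axis=1)
    return (labels[nearest] == labels).mean()

X, y = load_iris(return_X_y=True)
proj = PCA(n_components=2).fit_transform(X)   # stand-in for the model's projection
print(f"1NN same-class rate of the projection: {same_class_nn_rate(proj, y):.3f}")
```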

Interaction Data Sets In The UK: An Audit

At a time when the results of the 2001 Census are becoming more and more outdated, it also makes sense to complement and enhance our interpretation of the census flow statistics with interaction data from other sources. This is the raison d'être for this paper, which reports on an audit that we have conducted of all types of interaction data that are available on a national basis in the UK. Moreover, having developed WICID for the specific function of storing and providing users with access to census interaction data, it is timely to explore the possibilities of adapting the system for other origin-destination flow data sets that will provide new and beneficial insights into human interaction behaviour. It would be extremely useful, for example, to bring data sets into WICID that allow us to identify migration or commuting trends between censuses, or to substantiate the magnitude and pattern of flows derived from census sources with flows derived from alternative sources, e.g. for students or immigrants, or even to study less well known population flows not picked up by the standard Census questions, such as those experienced by patients travelling to receive treatment.

Spatial prediction models for landslide hazards: review, comparison and evaluation

lines and corrected according to air photos and field measurements. The nominal resolution of the DEM is 5 m. Similar to the papers discussed earlier in this work, the terrain parameters used range from local parameters (elevation, slope, aspect, plan, profile and total curvature, convergence index) to parameters that depend on topological site characteristics (contributing area, its height, mean slope and mean aspect; vertical distance to channel network and from ridge). In the case of skewed variables or variables for which a nonlinear relationship is to be expected, simple transforms (logarithm; binary splits such as “distance to past landslides smaller than 200 m”) were added without regard to the covariates’ actual empirical relation to the response, i.e. without fitting the covariates to the data manually. Terrain parameters were computed using the software SAGA.
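
The simple covariate transforms mentioned above (a logarithm and a binary distance split) can be added to a table of terrain parameters as sketched below; the column names and the 200 m threshold mirror the example in the text, while the data frame itself is a synthetic placeholder.

```python
# Sketch: adding simple transforms of skewed or nonlinearly related covariates,
# e.g. a log transform and a binary split on distance to past landslides.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
terrain = pd.DataFrame({
    "contributing_area": rng.lognormal(mean=6.0, sigma=1.5, size=1000),   # strongly skewed
    "dist_past_landslides": rng.uniform(0, 2000, size=1000),              # metres
})

# Log transform for the skewed covariate.
terrain["log_contributing_area"] = np.log(terrain["contributing_area"])

# Binary split: "distance to past landslides smaller than 200 m".
terrain["near_past_landslide"] = (terrain["dist_past_landslides"] < 200).astype(int)

print(terrain.describe().round(2))
```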

Multiple Target Test Generators for Optimizing the ATPG Process to Reduce Number of Test Sets

The manufacturing test is an important step in the production process of computer chips. A test set is applied to each fabricated chip in order to detect defective devices. Important factors in the test cost are the test data volume and the size of the test set. The growing complexity of today's designs leads to rapidly increasing test data and consequently to high test costs. Therefore, much effort is spent on reducing the test data. Two different techniques are generally used to reduce the test data: test compression applies additional hardware to compress test cubes and responses, while test compaction techniques reduce the number of test patterns (ideally without reducing the fault coverage) to save test data.
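
As a rough illustration of test compaction (independent of the multiple-target generator proposed in the paper), the sketch below greedily merges test cubes whose specified bits do not conflict, with 'X' marking don't-care positions; the cube set is a made-up example.

```python
# Sketch: greedy static compaction of test cubes.
# Two cubes are compatible if no position has conflicting specified bits (0 vs 1);
# merging keeps the specified bit wherever either cube defines one.
def compatible(a, b):
    return all(x == y or x == "X" or y == "X" for x, y in zip(a, b))

def merge(a, b):
    return "".join(y if x == "X" else x for x, y in zip(a, b))

def compact(cubes):
    merged = []
    for cube in cubes:
        for i, m in enumerate(merged):
            if compatible(cube, m):
                merged[i] = merge(m, cube)
                break
        else:
            merged.append(cube)
    return merged

cubes = ["1XX0X", "10X0X", "XX110", "0X11X", "X0X01"]
print(compact(cubes))   # fewer patterns than the original cube list
```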

Online Full Text

Abstract—This paper deals with the problem of tissue characterization of plaque in the coronary arteries by processing the data from an intravascular ultrasound catheter. Two similarity-based methods are proposed in the paper, namely a histogram-based method and a center-of-gravity-based method. Both of them use the general computational strategy of a moving window with a fixed size and a maximal overlapping ratio. The obtained similarity results are graphically displayed in two modes: hard decision with a given threshold and soft decision with gradual changes in the dissimilarity values.
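
A minimal sketch of the moving-window, histogram-based similarity strategy described in the abstract is given below; the reference histogram, window size, step, distance measure and synthetic signal are all illustrative assumptions rather than the paper's actual parameters.

```python
# Sketch: moving window with fixed size and heavy overlap, comparing the histogram
# of each window against a reference histogram and thresholding the dissimilarity.
import numpy as np

def window_dissimilarity(signal, reference_hist, win=64, step=8, bins=16, value_range=(0.0, 1.0)):
    scores = []
    for start in range(0, len(signal) - win + 1, step):
        hist, _ = np.histogram(signal[start:start + win], bins=bins,
                               range=value_range, density=True)
        scores.append(0.5 * np.abs(hist - reference_hist).sum() / bins)  # L1-style distance
    return np.array(scores)

rng = np.random.default_rng(0)
signal = np.concatenate([rng.beta(2, 5, 500), rng.beta(5, 2, 500)])   # synthetic "tissue" signal
reference_hist, _ = np.histogram(rng.beta(2, 5, 5000), bins=16, range=(0.0, 1.0), density=True)

scores = window_dissimilarity(signal, reference_hist)
hard = scores > scores.mean()          # hard decision against a simple threshold
print(f"{hard.sum()} of {len(scores)} windows flagged as dissimilar")
```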

Prepare and Optimize Data Sets for Data Mining Analysis

Preparing a data set is usually the most tedious part of a data mining task, requiring numerous complex SQL queries that join tables and aggregate columns. Existing SQL aggregations are limited for preparing data sets because they return one column per aggregated group. As a rule, significant manual effort is required to build data sets where a horizontal layout is needed. We propose simple yet effective methods to generate SQL code that returns aggregated columns in a horizontal tabular layout, returning a set of numbers instead of one number per row. This new class of functions is called horizontal aggregations. Horizontal aggregations construct data sets with a horizontal denormalized layout, which is the standard layout needed by most data mining algorithms. We propose three basic strategies to evaluate horizontal aggregations: CASE, exploiting the SQL CASE construct; SPJ, based on standard relational algebra operators (SPJ queries); and PIVOT, using the PIVOT operator offered by some DBMSs. Horizontal aggregations produce large volumes of data that are then partitioned into homogeneous clusters, which is important in the system; this can be performed with the k-means clustering algorithm.
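
The effect of a horizontal aggregation, one row per grouping entity with one aggregated column per category, is illustrated below with a pandas pivot table as a stand-in for the SQL CASE/SPJ/PIVOT strategies named in the text; the toy table and column names are invented for the example.

```python
# Sketch: a "horizontal" aggregation - one row per customer, one aggregated
# column per product category - produced here with a pandas pivot table.
import pandas as pd

sales = pd.DataFrame({
    "customer": ["c1", "c1", "c2", "c2", "c2", "c3"],
    "category": ["books", "toys", "books", "books", "toys", "toys"],
    "amount":   [10.0, 5.0, 7.0, 3.0, 8.0, 4.0],
})

# Vertical aggregation: one row per (customer, category) group.
vertical = sales.groupby(["customer", "category"])["amount"].sum()

# Horizontal aggregation: denormalized layout suited to data mining algorithms.
horizontal = sales.pivot_table(index="customer", columns="category",
                               values="amount", aggfunc="sum", fill_value=0.0)
print(vertical, horizontal, sep="\n\n")
```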

Restricted Algorithms System Sets Sets Your Constructive

Functions or systems of functional equations built using system kits are expressed as units / SETS, owing to restrictions on their construction as sets. Processing the determinants of the different interactions between elements of the sets is basically a geometric projection onto different planes, which sometimes simplifies globally different dimensional expressions.

DATA CLEANING TECHNIQUES FOR SOFTWARE ENGINEERING DATA SETS

Table 5.5 shows a comparison of the three different noise handling techniques and the benchmark approach of doing nothing. Recall that, since the EDS data set is drawn from the real world, no definitive statements can be made concerning the ‘true’ noise level. Consequently, proxy measures were used based upon the ability of a rule tree classifier to correctly classify the project effort of each instance and upon the ability to isolate implausible instances. The initial data set contained n = 8888 instances after projects were eliminated where development effort was unknown. The number of instances eliminated, e, depended upon the technique. Note, however, that the filtering and polish method eliminated zero instances (as did the do-nothing strategy), since values were edited rather than removed. This might be seen as an advantage compared with the robust filtering method, which eliminated more than 6200 instances (i.e. in excess of 70% of all cases), and the filtering method, which eliminated more than 5800 instances (i.e. in excess of 65% of all instances). Next, the relative number of implausible instances, i, that were not identified and therefore not eliminated can be observed. For the do-nothing technique all 347 remained, whereas the robust filter was able to eliminate just over 88% of such instances. This is indicative of the effectiveness of the approach, inasmuch as it may be believed that this gives an indication of the ability of the technique to remove the non-implausible noisy, and therefore unidentified, instances. The surprising value of i is for the filtering and polish technique, which actually generated new (i.e. not previously contained in the data set) implausible instances. This is a consequence of the way in which new values are imputed for those cases that are filtered. It is also consistent with Teng's results [179, 180]. Ultimately this fact is not believed to be too serious, since implausible instances can always be detected algorithmically. The other problematic instances contained in the data set were those that had zero productivity values. These may be considered a special kind of implausible value; however, the cause is missing values rather than noise. As stated previously, the data set contained a substantial number, 7436, of problematic instances where size information was unavailable
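
A generic classification filter of the kind used as a proxy above, flagging instances that a tree classifier cannot classify correctly under cross-validation and then removing them, might look like the sketch below; the synthetic data, the tree learner and the decision to drop rather than polish flagged instances are all assumptions made for illustration, not the EDS study's exact procedure.

```python
# Sketch: a simple classification filter for noise handling - flag instances that a
# decision tree misclassifies under cross-validation, then drop them ("filtering").
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_predict
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=10, flip_y=0.15, random_state=0)

predicted = cross_val_predict(DecisionTreeClassifier(random_state=0), X, y, cv=10)
noisy = predicted != y                      # proxy for noisy/implausible instances

X_filtered, y_filtered = X[~noisy], y[~noisy]
print(f"eliminated e = {noisy.sum()} of n = {len(y)} instances")
```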

Trustworthiness of Big Data

We don't know what final end users are expecting unless we collaborate with them and include them in reviewing our test scenarios and test approach. In most cases the independent testing team will not have access to end users. This is a core challenge, as the tester here has to simulate user behaviour without much interaction with them. Look for data sampling wherever possible, including call logs, user feedback, usage patterns and collaborative filters from previous releases, keeping in mind that ongoing brainstorming with the product team is important in arriving at test scenarios and an optimized test matrix.

On combination of SMOTE and particle swarm optimization based radial basis function classifier for imbalanced problems

For balanced data sets, the prevalent approach for constructing the RBF and other sparse kernel classifiers is to assign a fixed common variance for every kernel and to select input data as the candidate centers for RBF kernels by minimizing the leave-one-out (LOO) misclassification rate in the efficient OFS procedure [32]. This approach has its roots in regression applications [33]-[36]. There are two limitations with this “fixed” RBF kernel approach. Firstly, kernels cannot be flexibly tuned, as the position of each kernel is restricted to the input data and the shape of each kernel is fixed rather than determined by the model learning procedure. Secondly, the common kernel variance has to be determined via cross validation, which inevitably increases the computational cost. Previous studies [37]-[39] constructed a tunable RBF classifier based on the OFS procedure, using a global search optimization algorithm [40] to optimize the RBF kernels one by one. This tunable RBF kernel approach was observed to produce sparser classifiers with better performance but higher computational complexity in classifier construction, in comparison with the standard fixed kernel approach. Recently, the particle swarm optimization (PSO) algorithm [41] was adopted to minimize the LOO misclassification rate in the OFS construction of the tunable RBF classifier [42], [43]. PSO [41] is an efficient population-based stochastic optimization technique inspired by the social behaviour of bird flocks or fish schools, and it has been successfully applied to wide-ranging optimization applications [44]-[48]. Owing to the efficiency of PSO, the tunable RBF modeling approach advocated in [42], [43] offers significant advantages in terms of better generalization performance and smaller classifier size, as well as lower complexity in the learning process, compared with the standard fixed kernel approach. This PSO-aided tunable RBF classifier therefore offers the state of the art for balanced data sets. When dealing with highly imbalanced problems, however, its performance may degrade.
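
Since the passage leans on PSO, a minimal particle swarm optimizer for a generic objective is sketched below; the quadratic objective, swarm size and inertia/acceleration coefficients are conventional illustrative choices, not those of the cited RBF construction, where the objective would instead be the LOO misclassification rate as a function of one kernel's centre and width.

```python
# Sketch: a minimal particle swarm optimizer minimizing a generic objective.
import numpy as np

def pso(objective, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))            # positions
    v = np.zeros_like(x)                                  # velocities
    pbest, pbest_val = x.copy(), np.apply_along_axis(objective, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.apply_along_axis(objective, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

best_x, best_val = pso(lambda z: np.sum(z ** 2), dim=5)
print(f"best objective value {best_val:.6f} at {np.round(best_x, 3)}")
```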

The WebNLG Challenge: Generating Text from RDF Data

The WebNLG challenge consists in mapping sets of RDF triples to text. It provides a common benchmark on which to train, evaluate and compare “microplanners”, i.e. generation systems that verbalise a given content by making a range of complex interacting choices including referring expression generation, aggregation, lexicalisation, surface realisation and sentence segmentation. In this paper, we introduce the microplanning task, describe data preparation, introduce our evaluation methodology, analyse participant results and provide a brief description of the participating systems.
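
To make the task's input and output concrete, the sketch below represents a small set of RDF-style triples and produces a naive template verbalisation; the triples, predicate names and templates are invented examples, far simpler than the microplanning choices the challenge evaluates.

```python
# Sketch: a toy verbaliser for a set of subject-predicate-object triples.
# Real WebNLG systems must also handle aggregation, referring expressions, etc.
triples = [
    ("John_Doe", "birthPlace", "London"),
    ("John_Doe", "occupation", "Astronaut"),
]

templates = {
    "birthPlace": "{s} was born in {o}.",
    "occupation": "{s} works as an {o}.",
}

def verbalise(triples):
    sentences = [templates[p].format(s=s.replace("_", " "), o=o.replace("_", " "))
                 for s, p, o in triples]
    return " ".join(sentences)

print(verbalise(triples))
```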

Classification-based comparison of pre-processing methods for interpretation of mass spectrometry generated clinical datasets

Pre-processing is the first essential step in the analysis of mass spectrometry generated data. Inadequate pre-processing has been shown to have a negative effect on the reproducibility of biomarker identification and the extraction of clinically useful information [11,12]. Since there is no generally accepted approach to pre-processing, different methods have been proposed, for example [13-17]. Given the large number of existing pre-processing techniques, one would like to know which one is most effective. Therefore, the comparison of pre-processing techniques has recently gained new interest. Cruz-Marcelo et al. [18] and Emanuele et al. [19] compared five and nine pre-processing methods, respectively. However, these studies evaluate the strengths and weaknesses of the different methods on simulated data and quality control datasets. Moreover, the performance of a pre-processing method is only evaluated in terms of reproducibility (coefficient of variation) and the sensitivity/specificity of peak detection. While these provide important information, our goal in this paper is to compare pre-processing methods in a clinical setting with a relevant and measurable objective. A realistic clinical setting is provided by in-house ovarian cancer and Gaucher disease profiling datasets, and our objective is to maximize classification performance across five different classification methods. We compare the method implemented in Ciphergen ProteinChip Software 3.1 with the mean spectrum technique from the Cromwell package [5] in a classification setting. Ciphergen was included since it is still the most com-
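
A classification-based comparison of pre-processing choices, in the spirit of what the authors describe, could be organised as in the sketch below; the two pre-processing functions, the classifiers and the synthetic "spectra" are placeholders rather than the Ciphergen or Cromwell pipelines and the five classifiers actually used.

```python
# Sketch: comparing pre-processing methods by downstream classification performance.
# Each pre-processing function is scored by cross-validated accuracy across classifiers.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=200, n_informative=20, random_state=0)

preprocessors = {
    "raw": lambda m: m,
    "log_scaled": lambda m: np.log1p(m - m.min()),     # placeholder "pre-processing"
}
classifiers = {"logreg": LogisticRegression(max_iter=2000), "svm": SVC()}

for p_name, prep in preprocessors.items():
    Xp = prep(X)
    for c_name, clf in classifiers.items():
        acc = cross_val_score(clf, Xp, y, cv=5).mean()
        print(f"{p_name:10s} + {c_name:6s}: accuracy {acc:.3f}")
```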

Diagnosis of Shorted-Turns Faults in Electrical Machine using Neural Network

well and also shows some relationships between the FI and NN approaches. This leads to the design of a workflow for the NN. The workflow has three stages: data acquisition, the training algorithm, and diagnosis and detection of the machine condition. Data acquisition involves collecting the electrical machine data (stator currents and voltages) into the computer for analysis and diagnosis purposes. The training algorithm stage covers the creation, configuration, training and validation of the NN from the captured machine data. Diagnosis and decision on the machine condition means that, once the network has been trained with the captured machine data, it can be used to test other sets of data to determine the condition of the machine. In order to test the NN, 200 samples of data, taken from outside the ones used in the network, are applied to it. When the set of 200 data samples gives an approximate value of 1, the machine is operating in a healthy condition. However, when the sample data give an approximate value of 2.6, a shorted-turn fault is detected. There are high correlation coefficients of R = 0.9992, R = 0.99917 and R = 0.99923 in the training, validation and test phases that contain all networks for the NN model, respectively, in Figure 8. The overall correlation for the training, validation and test phases is R = 0.9992. This implies that the model gives a high correlation coefficient between predicted outputs and targets. Using the NN model, the healthy and shorted-turn electrical machine conditions are correctly predicted in Figure 10. Thus, the approach is robust and reliable for diagnosing shorted-turn faults in the electrical
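
The decision rule described above (a network output near 1 indicating a healthy machine and near 2.6 indicating a shorted-turn fault) can be expressed as a nearest-target check, as in the sketch below; the tolerance and the sample outputs are illustrative assumptions, not values from the paper.

```python
# Sketch: mapping a trained network's output to a machine condition by
# comparing it with the two target values reported in the text (1 and 2.6).
TARGETS = {"healthy": 1.0, "shorted-turn fault": 2.6}

def diagnose(nn_output, tolerance=0.3):
    label, target = min(TARGETS.items(), key=lambda kv: abs(nn_output - kv[1]))
    return label if abs(nn_output - target) <= tolerance else "inconclusive"

for out in (1.02, 2.55, 1.8):     # hypothetical network outputs
    print(f"output {out:.2f} -> {diagnose(out)}")
```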

Assessing and generating test sets in terms of behavioural adequacy

The basic approach (as presented in [4]) is shown in Figure 3 (the arcs are numbered to indicate the flow of events). The test generator produces tests according to some fixed distribution D; these are executed on the system under test (SUT) c. With respect to the conventional PAC framework, the two together perform the function of EX(c, D). The process starts with the generation of a test set A by the test generator (this is what we are assessing for adequacy). This is executed on the SUT; the executions are recorded and supplied to the inference tool, which infers a hypothetical model that predicts outputs from inputs. Now, the test generator supplies a further test set B. The user may supply the acceptable error bounds ε and δ (without these the testing process can still operate, but without conditions for what constitutes an adequate test set). The observations of test set B are then compared against the expected observations from the model, and the results are used to compute the error error_D(h). If this is smaller than ε, the model inferred from test set A can be deemed approximately accurate (i.e. the test set can be deemed approximately adequate).
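
The adequacy check at the end of the passage can be phrased as in the sketch below: run test set B on both the SUT and the inferred model, estimate the disagreement rate as a stand-in for error_D(h), and compare it with ε. The SUT, the inferred hypothesis and the bound are all stand-ins invented for the example.

```python
# Sketch: estimating error_D(h) from test set B and deciding approximate adequacy.
# sut and hypothesis stand in for the real system and the model inferred from test set A.
import random

def sut(x):            # hypothetical system under test
    return x % 7 < 3

def hypothesis(x):     # hypothetical model inferred from test set A
    return x % 7 < 4

epsilon = 0.2
test_set_b = [random.randrange(1000) for _ in range(500)]   # tests drawn from distribution D

disagreements = sum(sut(x) != hypothesis(x) for x in test_set_b)
error_estimate = disagreements / len(test_set_b)

adequate = error_estimate < epsilon
print(f"estimated error_D(h) = {error_estimate:.3f}; test set A deemed "
      f"{'approximately adequate' if adequate else 'inadequate'}")
```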
