statistical algorithms

Top PDF statistical algorithms:

MIA-Sig: multiplex chromatin interaction analysis by signal processing and statistical algorithms

In parallel, algorithms have been developed to analyze protein-enriched 3D architecture data from assays such as ChIA-PET. Similar to Hi-C, ChIA-PET data are also prone to bias and noise, which are computationally filtered out by statistical algorithms such as ChIA-PET Tool (Li et al. [16]) and ChiaSig (Paulsen et al. [23]). The main idea is to model the interaction frequency between two loci as a hypergeometric or non-central hypergeometric distribution. To accommodate the recently developed variants HiChIP (Mumbach et al. [21]) and PLAC-seq (Fang et al. [8]), researchers developed hichipper (Lareau and Aryee [15]), FitHiChIP (Bhattacharyya et al. [4]), and MAPS (Juric et al. [11]) to remove systematic biases and identify significant loops. In ChIA-PIPE (Capurso et al. [5]), the de-noising is done by filtering out loops without peak support in the anchors. Unfortunately, these tools are specifically designed to model interactions between two loci and do not readily generalize to interactions involving more than two loci.
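
As a rough illustration of the shared core idea, a hypergeometric tail test for a single pair of anchors might be sketched as follows; the function name, parameter names, and the exact null model are simplifying assumptions for illustration, not the formulation used by ChiaSig or the other tools.

```python
from scipy.stats import hypergeom

def pairwise_interaction_pvalue(pets_ab, pets_a, pets_b, pets_total):
    """Hypothetical significance test for a loop between anchors A and B.

    pets_ab    -- PETs observed linking A and B
    pets_a     -- total PETs with one end in anchor A
    pets_b     -- total PETs with one end in anchor B
    pets_total -- total PETs genome-wide

    Null model: the pets_a ends of A-anchored PETs are drawn at random
    from the pets_total ends, of which pets_b fall in B.  The p-value is
    the hypergeometric upper tail P(X >= pets_ab).
    """
    return hypergeom.sf(pets_ab - 1, pets_total, pets_b, pets_a)

# Example: 25 linking PETs against a chance expectation of ~0.7
print(pairwise_interaction_pvalue(pets_ab=25, pets_a=400, pets_b=350, pets_total=200_000))
```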

Comparison of statistical algorithms for daily syndromic surveillance aberration detection

We used different measures to evaluate the performance of the detection systems in the presence of outbreaks. The measures are the power of detection (POD), sensitivity (also known as the true positive rate), specificity (also known as the true negative rate, or '1 - false positive rate'), positive predictive value (also known as PPV or precision) and timeliness. POD is the probability of having an alarm at least once during a spiked outbreak, i.e. the probability of detecting the outbreak; sensitivity is the proportion of alarms among spiked outbreak days; specificity is the proportion of no alarms among non-outbreak days; PPV is the proportion of detected outbreaks that are true positives, i.e. the proportion of detections that are correct; timeliness is the proportion of days elapsed to detect an outbreak since its start. This measure of timeliness prevents undue weight being given to poor performance during a very long outbreak, which is a problem if timeliness is measured as the number of days since the start of an outbreak. If an outbreak was not detected, then timeliness was set to 1. Sensitivity and specificity are rates per day, whereas POD, PPV and timeliness are rates per outbreak. For each of the 16 simulated signals, all five measures are computed by running the algorithms on the most recent 49 weeks (343 days) of the 100 simulations, across each of the four sizes of 'spiked outbreaks'. We note that we use these measures to evaluate the detection of just 'spiked outbreaks' in the presence of 'spiked outbreaks', 'seasonal outbreaks' and public holidays. Below are the explicit formulae used to compute each measure.
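
The formulae themselves are not included in this excerpt. A minimal sketch of how the per-day and per-outbreak measures could be computed from the verbal definitions above might look like the following; the array layout, function name, and the per-day treatment of PPV are assumptions of this sketch, not the paper's exact formulae.

```python
import numpy as np

def detection_measures(alarms, outbreak_days, outbreak_spans):
    """alarms, outbreak_days: per-day boolean arrays (True = alarm / outbreak day).
    outbreak_spans: list of (start, end) day indices, one pair per spiked outbreak."""
    alarms = np.asarray(alarms, dtype=bool)
    outbreak_days = np.asarray(outbreak_days, dtype=bool)

    sensitivity = alarms[outbreak_days].mean()        # alarms among outbreak days
    specificity = (~alarms)[~outbreak_days].mean()    # no alarms among non-outbreak days
    ppv = outbreak_days[alarms].mean() if alarms.any() else np.nan  # per-day approximation

    detected, timeliness = [], []
    for start, end in outbreak_spans:
        hits = np.flatnonzero(alarms[start:end + 1])
        detected.append(hits.size > 0)
        # proportion of the outbreak elapsed before the first alarm; 1 if never detected
        timeliness.append(hits[0] / (end - start + 1) if hits.size else 1.0)

    return {"POD": float(np.mean(detected)),
            "sensitivity": float(sensitivity),
            "specificity": float(specificity),
            "PPV": float(ppv),
            "timeliness": float(np.mean(timeliness))}
```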

Design and Implementation of A New CFAR Based on Weighted and Statistical Algorithms

In this paper, a CFAR processor that can operate in most environments and with most target situations has been presented, together with the design and implementation of this new practical CFAR processor. The new CFAR combines the properties of three different CFAR algorithms (CA, OSGO, and OSSO) drawn from two different families, averaging and statistical. It has outperformed them, with a probability of detection P_D of 97.25% for simulation and 96.25% for the
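
The excerpt does not describe how the three detectors are combined. As background only, a minimal sketch of plain cell-averaging (CA) CFAR, one of the three components named above, might look like this; the window sizes, Pfa value, and function name are illustrative assumptions.

```python
import numpy as np

def ca_cfar(x, n_train=16, n_guard=2, pfa=1e-3):
    """Basic cell-averaging CFAR over a vector of squared-magnitude samples.

    The noise level at each cell is estimated by averaging n_train reference
    cells (split around n_guard guard cells on each side), and the threshold
    scaling alpha is chosen to give the desired false-alarm probability."""
    n = len(x)
    half = n_train // 2
    alpha = n_train * (pfa ** (-1.0 / n_train) - 1.0)   # scaling for the desired Pfa
    detections = np.zeros(n, dtype=bool)
    for i in range(half + n_guard, n - half - n_guard):
        lead = x[i - half - n_guard:i - n_guard]         # reference cells before the CUT
        lag = x[i + n_guard + 1:i + n_guard + half + 1]  # reference cells after the CUT
        noise = np.concatenate([lead, lag]).mean()
        detections[i] = x[i] > alpha * noise
    return detections
```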

Automatic Derivation of Statistical Algorithms: The EM Family and Beyond

Machine learning has reached a point where many probabilistic methods can be understood as variations, extensions and combinations of a much smaller set of abstract themes, e.g., as different instances of the EM algorithm. This enables the systematic derivation of algorithms customized for different models. Here, we describe the AutoBayes system which takes a high-level statistical model specification, uses powerful symbolic techniques based on schema-based program synthesis and computer algebra to derive an efficient specialized algorithm for learning that model, and generates executable code implementing that algorithm. This capability is far beyond that of code collections such as Matlab toolboxes or even tools for model-independent optimization such as BUGS for Gibbs sampling: complex new algorithms can be generated without new programming, algorithms can be highly specialized and tightly crafted for the exact structure of the model and data, and efficient and commented code can be generated for different languages or systems. We present automatically-derived algorithms ranging from closed-form solutions of Bayesian textbook problems to recently-proposed EM algorithms for clustering, regression, and a multinomial form of PCA.

Statistical algorithms for ontology-based annotation of scientific literature

We have demonstrated a stochastic framework for annotating BrainMap literature using the Cognitive Paradigm Ontology. Unlike text mining algorithms, our framework can model the knowledge encoded by the dependencies in the ontology, albeit indirectly. We successfully exploit the fact that CogPO has explicitly stated restrictions, and implicit dependencies in the form of patterns in the expert curated annotations. The advantage of our pragmatic approach is that it is robust to explicit future modifications and additions that could be made to the relationships and restrictions in CogPO. Since we do not explicitly model the relations and restrictions, but capture them implicitly from training patterns, we do not have to make corresponding updates to our algorithm each time CogPO is updated by humans. We merely need to have a correctly annotated body of work. The constrained decision tree architecture further improves upon the naïve Bayes results. When we fix the first node of the decision tree, there is a significant improvement in the annotation accuracy. This is a useful tool for aiding a human expert in the annotation task.

A novel data mining system points out hidden relationships between immunological markers in multiple sclerosis

Unfortunately, the overall immunological results in the literature are sometimes conflicting and often insufficient to disclose the effective relationships among the studied variables. Although differences between trial designs, patient populations, immunological markers, and technical methodologies can explain most of this inconsistency, the statistical analyses themselves might be an important factor to consider. Traditional statistical algorithms are both unsuitable and underpowered to dissect the relationships among a high number of markers, due to the non-linearity and complexity of the immunological network; a fuzzy clustering approach based on evolutionary programming (PST) and a semantic connectivity map (AutoCM) could find the natural associations among immunological markers.

Rician statistical assumptions for geographic routing in wireless sensor networks

The widely used mathematical error model considers location errors to be normally distributed [5], [6], [7], [8], which facilitates the assumption that the distances between nodes follow a Rice distribution. The Rician statistical statement holds only if a simplifying assumption is made, i.e. that the variances of the estimated x and y coordinates are equal. However, this is not necessarily always true and can affect forwarding algorithms that are based on Rician assumptions. This work presumes the Rician hypothesis to be true and verifies it via MATLAB simulations employing a realistic localisation simulation (here based on received signal strength (RSS) ranging).
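
A small Monte Carlo check of that premise (written here in Python rather than MATLAB, with parameter values chosen purely for illustration) could compare the empirical distance distribution under equal-variance Gaussian coordinate errors against the Rice distribution:

```python
import numpy as np
from scipy.stats import rice

rng = np.random.default_rng(0)

# True positions of two nodes and the std of the (equal-variance) coordinate errors
p1, p2 = np.array([0.0, 0.0]), np.array([8.0, 6.0])   # true distance = 10
sigma = 2.0
n = 100_000

# Estimated positions: true positions plus independent Gaussian errors on x and y
e1 = p1 + rng.normal(0.0, sigma, size=(n, 2))
e2 = p2 + rng.normal(0.0, sigma, size=(n, 2))
d = np.linalg.norm(e1 - e2, axis=1)

# The difference vector has std sqrt(2)*sigma per coordinate, so the distance
# should follow a Rice distribution with scale sqrt(2)*sigma and non-centrality 10.
s = np.sqrt(2.0) * sigma
nu = np.linalg.norm(p2 - p1)
print("empirical mean:", d.mean(), " Rice mean:", rice.mean(b=nu / s, scale=s))
```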

A Comparative Study on Bearings Faults Classification by Artificial Neural Networks and Self-Organizing Maps using Wavelets

This paper presents a comparative study between the soft computing techniques Artificial Neural Networks (ANN) and Self-Organizing Maps (SOM), using the continuous wavelet transform (CWT), for fault diagnosis of rolling element bearings. Six different base wavelets, three real-valued and three complex-valued, are considered. Out of these six wavelets, the base wavelet minimizing the Shannon entropy is selected to extract statistical features from the wavelet coefficients of raw vibration signals. Finally, bearing faults are classified using these statistical features as input to the two soft computing techniques, i.e. ANN and SOM. The complex Morlet wavelet is selected based on the Shannon entropy criterion using the proposed methodology. The test results show that the ANN identifies the fault categories of rolling element bearings more accurately and has better diagnosis performance than the SOM.
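
As a rough sketch of the wavelet-selection step, Shannon entropy can be computed from the normalized energy of the CWT coefficients and the lowest-entropy candidate kept; the wavelet list, scales, and the use of PyWavelets here are illustrative assumptions, not the authors' exact setup.

```python
import numpy as np
import pywt

def shannon_entropy(coeffs):
    """Shannon entropy of the energy distribution of wavelet coefficients."""
    energy = np.abs(coeffs) ** 2
    p = energy / energy.sum()
    p = p[p > 0]                      # avoid log(0)
    return float(-(p * np.log2(p)).sum())

def select_base_wavelet(signal, wavelets=("morl", "mexh", "cmor1.5-1.0"),
                        scales=np.arange(1, 64)):
    """Return the candidate wavelet whose CWT coefficients have minimum entropy."""
    entropies = {}
    for w in wavelets:
        coeffs, _ = pywt.cwt(signal, scales, w)
        entropies[w] = shannon_entropy(coeffs)
    return min(entropies, key=entropies.get), entropies

# Example with a synthetic vibration-like signal
t = np.linspace(0, 1, 2048)
sig = np.sin(2 * np.pi * 50 * t) + 0.3 * np.random.randn(t.size)
best, ent = select_base_wavelet(sig)
print(best, ent)
```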

A Study on Brand Image towards Nandini Products at Manmul in Mandya

The Friedman test is based on n sets of ranks, one set for each data set in our case, and the performances of the algorithms analyzed are ranked separately for each data set. Such a ranking scheme allows for intra-set comparisons only, since inter-set comparisons are not meaningful. When the number of algorithms for comparison is small, this may pose a disadvantage. In such cases, comparability among data sets is desirable, and we can employ the method of aligned ranks. In this technique, a value of location is computed as the average performance achieved by all algorithms on each data set. Then, the difference between the performance obtained by an algorithm and this value of location is calculated. This step is repeated for all algorithms and data sets. The resulting differences, called aligned observations, which keep their identities with respect to the data set and the combination of algorithms to which they belong, are then ranked from 1 to kn relative to each other. The ranking scheme is then the same as that employed by a multiple comparison procedure which uses independent samples, such as the Kruskal–Wallis test. The ranks assigned to the aligned observations are called aligned ranks.
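
A compact sketch of the aligned-rank computation described above; a k-algorithms-by-n-datasets performance matrix is assumed, with higher values meaning better performance.

```python
import numpy as np
from scipy.stats import rankdata

def aligned_ranks(perf):
    """perf: (n_datasets, k_algorithms) matrix of performance scores.

    Subtract each data set's location (the mean over algorithms), then rank
    all k*n aligned observations jointly from 1 to k*n."""
    perf = np.asarray(perf, dtype=float)
    aligned = perf - perf.mean(axis=1, keepdims=True)    # aligned observations
    return rankdata(aligned.ravel()).reshape(aligned.shape)

# Example: 4 data sets, 3 algorithms
perf = np.array([[0.81, 0.84, 0.79],
                 [0.62, 0.70, 0.65],
                 [0.91, 0.93, 0.90],
                 [0.55, 0.60, 0.52]])
print(aligned_ranks(perf))                 # ranks run from 1 to 12
print(aligned_ranks(perf).sum(axis=0))     # per-algorithm aligned rank sums
```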

Sample Selection for Statistical Parsers: Cognitively Driven Algorithms and Evaluation Measures

TNSK and ANSK for the task of sample selection for statistical parsers. Based on the psycholinguistic literature, we argue that these measures reflect aspects of the cognitive efforts of the human annotator that are not reflected by the traditional TC measure. We introduced the parameter based sample selection (PBS) approach and its CMM and CBS algorithms that do not deliberately select difficult sentences. Therefore, our intuition was that they should select a sample that leads to an accurate parameter estimation but does not contain a high number of complex structures. We demonstrated that CMM and

Accelerating MCMC algorithms

In a perspective different from the above parallel scheme of divide-and-conquer approaches, subsampling approaches aim at reducing the number of individual datapoint likelihood evaluations operated at each iteration toward accelerating MCMC algorithms. From a general perspective, these approaches can be further classified into two finer classes: exact subsampling methods and approximate subsampling methods, depending on their resulting outputs. Exact subsampling approaches typically require subsets of data of random size at each iteration. One solution to this effect is taking advantage of pseudo-marginal MCMC via constructing unbiased estimators of the target density evaluated on subsets of the data (Andrieu & Roberts, 2009). Quiroz, Villani, and Kohn (2016) follow this direction by combining the powerful debiasing technique of Rhee and Glynn (2015) and the correlated pseudo-marginal MCMC approach of Deligiannidis, Doucet, and Pitt (2015). Another direction is to use piecewise deterministic Markov processes (PDMP) (Davis, 1984, 1993), which enjoy the target distribution as the marginal of their invariant distribution. This PDMP version requires unbiased estimators of the gradients of the log-likelihood function, instead of the likelihood itself. By using a tight enough bound on the event rate function of the associated Poisson processes, PDMP can produce super-efficient scalable MCMC algorithms. The bouncy particle sampler (Bouchard-Côté, Vollmer, & Doucet, 2017) and the zig-zag sampler (Bierkens, Fearnhead, & Roberts, 2016) are two competing PDMP algorithms, while Bierkens et al. (2017) unify and extend these two methods. Besides, one should note that PDMP produces a non-reversible Markov chain, which means that the algorithm should be more efficient in terms of mixing rate and asymptotic variance, when compared with reversible MCMC algorithms, such as MH, HMC, and MALA, as observed in some theoretical and experimental works (Bierkens, 2016; Chen & Hwang, 2013; Hwang, Hwang-Ma, & Sheu, 1993; Sun, Gomez, & Schmidhuber, 2010).
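
For orientation, a generic pseudo-marginal Metropolis-Hastings skeleton is sketched below; constructing a genuinely unbiased, non-negative subsampled likelihood estimator is the hard part the cited papers address, and the estimator is only a placeholder argument here. Function names and the symmetric random-walk proposal are assumptions of this sketch.

```python
import numpy as np

def pseudo_marginal_mh(loglik_hat, log_prior, theta0, n_iter=5000, step=0.1, seed=0):
    """Generic pseudo-marginal Metropolis-Hastings.

    loglik_hat(theta) must return the log of a non-negative, unbiased estimate of
    the likelihood (e.g. built from a random subset of the data).  The estimate is
    recomputed only for proposed states and reused for the current state, which is
    what keeps the algorithm exact despite the noise in the estimator."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    log_num = log_prior(theta) + loglik_hat(theta)   # cached noisy log-target value
    samples = []
    for _ in range(n_iter):
        prop = theta + step * rng.standard_normal(theta.shape)
        log_num_prop = log_prior(prop) + loglik_hat(prop)
        if np.log(rng.random()) < log_num_prop - log_num:
            theta, log_num = prop, log_num_prop
        samples.append(theta.copy())
    return np.array(samples)
```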

Reconstruction of global gridded monthly sectoral water withdrawals for 1971–2010 and analysis of their spatiotemporal patterns

Historical water withdrawal records by sector are reported by many agencies and organizations. Shiklomanov and Rodda (2003) published a global water resources assessment (including water withdrawal and consumption data) for 26 regions based on literature review and statistical surveys. Additionally, estimated water use by sector (irrigation, livestock, domestic, industry, and hydroelectric power) at the state and county levels in the US has been reported by the US Geological Survey (USGS) every 5 years, since 1950 and 1985, respectively. Similar historical water use reports are also published by the Ministry of Water Resources of China, the Statistisches Bundesamt of Germany, the Ministry of Land Infrastructure and Transportation in Japan, and the Water Security Agency of Canada. Consolidating these sub-national water withdrawal data, which are reported by various organizations and institutions, can be challenging due to potential inconsistencies in the definition of sectoral water withdrawals. Another global water use inventory, AQUASTAT, which has been developed by the Food and Agriculture Organization (FAO), provides historical water withdrawals in particular sectors (agriculture, irrigation, domestic, and industry) every 5 years at the country level. Unfortunately, these historical records in some regions or water use sectors are often incomplete or missing. Recently, Liu et al. (2016) developed a country-scale water withdrawal dataset by sector at a 5-year interval for 1973–2012 by filling the missing values in the FAO AQUASTAT dataset. Furthermore, most existing water withdrawal inventories have been published at an annual scale or 5-year interval for a particular region, which ignores the seasonal and spatial variations (aside from the irrigation estimates by models). The coarseness in data granularity may lead to an inadequate understanding of finer-scale water use and hold back water management policy development.

Feature Decay Algorithms for Fast Deployment of Accurate Statistical Machine Translation Systems

In this paper, we develop parallel feature decay algorithms (FDA) for solving computational scalability problems caused by the abundance of training data for SMT models and LM models, while still achieving SMT performance that is on par with using all of the training data, or better. Parallel FDA runs separate FDA models on randomized subsets of the training data and combines the instance selections later. We perform SMT experiments on all language pairs of WMT13 (Callison-Burch et al., 2013) and obtain SMT performance close to the baseline Moses (Koehn et al., 2007) system using fewer resources for training. With parallel FDA, we can solve not only the instance selection problem for the training data but also instance selection for the LM training corpus, which allows us to train higher order n-gram language models and model the dependencies better.
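
A minimal sketch of the core feature decay idea, scoring candidate training sentences by their n-gram overlap with the test set and decaying the value of an n-gram each time it is covered by a selected sentence, might look like this; the scoring rule, decay factor, and function names are simplified assumptions, not the exact FDA formulation.

```python
from collections import Counter

def ngrams(tokens, n_max=2):
    """All 1..n_max-grams of a token list."""
    return [tuple(tokens[i:i + n]) for n in range(1, n_max + 1)
            for i in range(len(tokens) - n + 1)]

def fda_select(candidates, test_sentences, k, decay=0.5):
    """Greedy instance selection with feature decay.

    candidates, test_sentences: lists of token lists.  Returns indices of the
    k selected candidates.  Each test-set n-gram starts with value 1.0 and is
    multiplied by `decay` every time a selected sentence contains it."""
    value = Counter()
    for sent in test_sentences:
        for g in set(ngrams(sent)):
            value[g] = 1.0

    selected, remaining = [], set(range(len(candidates)))
    for _ in range(min(k, len(candidates))):
        # score = per-word sum of current values of the candidate's test-set n-grams
        def score(i):
            feats = set(ngrams(candidates[i]))
            return sum(value[g] for g in feats if g in value) / max(len(candidates[i]), 1)
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
        for g in set(ngrams(candidates[best])):
            if g in value:
                value[g] *= decay          # decay features that are now covered
    return selected
```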

Analytical Study of Different Classification Technique for KDD Cup Data'99

tools for prediction of unknown, valid patterns and relationships in large datasets. As data mining has matured and evolved, it has become a vital part of KDD, i.e. knowledge discovery in databases, which encompasses the procedural steps of data selection, data preprocessing, data transformation, data mining, and interpretation and evaluation. There are many data mining techniques that are frequently used: artificial neural networks, association rules, memory-based or case-based reasoning, cluster analysis, classification algorithms and decision trees, and genetic algorithms [10].

Statistical Delay Bounds Oriented Packet Scheduling Algorithms in High Speed Networks

Packet scheduling and resource reservation are tightly coupled. To see this, we notice that the very purpose of scheduling algorithms is to provide delay QoS to traffic flows via bandwidth management. In order to make such QoS provisioning a priori, some form of bandwidth reservation, either explicit or implicit, is indispensable, as otherwise the random background traffic flows may destroy the QoS during the bandwidth competition process. On the other hand, how bandwidth should be reserved critically depends on the performance of the scheduling algorithms chosen. Another condition needed to make such delay QoS provisioning a priori is some form of source traffic regulation, as without it (in which case source nodes could generate traffic arbitrarily) the bandwidth reservation becomes meaningless. Therefore, it takes packet scheduling, resource reservation and source traffic regulation all together to provide delay QoS. This coupling in turn determines how the flow specification (QoS contract) should be made. The relationship among these functions and delay QoS provisioning is illustrated in Fig. 1.1. The queueing system in Fig. 1.1 can be considered as either local (a switching node) or end-to-end (a tandem of switching nodes).

Statistical Study of Least Mean Square and Normalised Least Mean Square Algorithms

The study of adaptive filters with non-stationary inputs is a very complex subject. In this study, the performances of the two algorithms are compared, and the NLMS algorithm is chosen on the basis of stability, transient response and steady-state behavior. The results of this paper suggest that the NLMS algorithm (with regularization) can be used effectively with cyclostationary inputs such as voice data. Indeed, this is precisely the behavior that is observed, for instance, with most voice-band echo cancellers.
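
For reference, a standard regularized NLMS update of the kind the excerpt refers to can be sketched as follows; the step size, regularization constant, filter length, and the toy identification example are illustrative choices, not the paper's setup.

```python
import numpy as np

def nlms(x, d, n_taps=32, mu=0.5, eps=1e-6):
    """Normalized LMS adaptive filter.

    x: input signal, d: desired signal (same length).  Returns (weights, errors).
    The step is normalized by the input energy in the current window, with a small
    regularization constant eps keeping the update stable when the input is weak."""
    n = len(x)
    w = np.zeros(n_taps)
    e = np.zeros(n)
    for k in range(n_taps, n):
        u = x[k - n_taps:k][::-1]              # most recent n_taps input samples
        y = w @ u                              # filter output
        e[k] = d[k] - y                        # a-priori error
        w += (mu / (eps + u @ u)) * e[k] * u   # regularized NLMS update
    return w, e

# Toy example: identify an unknown FIR channel from noisy observations
rng = np.random.default_rng(1)
h = rng.standard_normal(32)
x = rng.standard_normal(10_000)
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w, e = nlms(x, d)
print("final error power:", np.mean(e[-1000:] ** 2))
```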

Technical Methods and Algorithms for Developing Efficient Optical Character Recognition System: An Overview

Replicating human tasks such as reading through pattern recognition has been a long-time vision of researchers, and in recent years machine reading using OCR has turned that vision into an established reality. OCR is a good application of technological progress in artificial intelligence, computer vision and pattern recognition. Commercial application services exist for performing OCR in different applications, but OCR systems are still not able to compete with human reading performance. In this study, different technical algorithms for efficient OCR systems were critically reviewed.

Automatic Derivation of Statistical Data Analysis Algorithms: Planetary Nebulae and Beyond

Abstract. AutoBayes is a fully automatic program synthesis system for the data analysis domain. Its input is a declarative problem description in the form of a statistical model; its output is documented and optimized C/C++ code. The synthesis process relies on the combination of three key techniques. Bayesian networks are used as a compact internal representation mechanism which enables problem decompositions and guides the algorithm derivation. Program schemas are used as independently composable building blocks for the algorithm construction; they can encapsulate advanced algorithms and data structures. A symbolic-algebraic system is used to find closed-form solutions for problems and emerging subproblems.

A Survey On Data Mining Algorithm

Abstract— This paper puts forward the 8 most used data mining algorithms in the research field, which are: C4.5, k-Means, SVM, EM, PageRank, Apriori, kNN and CART. For each algorithm, a basic explanation is given with a real-time example, and each algorithm's pros and cons are weighed individually. These algorithms appear in some of the most important topics in data mining research and development, such as classification, clustering, statistical learning, association analysis, and link mining.

End-to-End Methodology for Mining from Datasets

Technically, data mining is the process of finding correlations or patterns among dozens of fields in large relational databases. Data mining is a widely used approach for extracting trends or patterns from historical data. Generally, data sets that are stored in a relational database or a data warehouse come from On-Line Transaction Processing (OLTP) systems in which database schemas are highly normalized. But data mining, statistical, or machine learning algorithms generally require aggregated data in summarized form. Also, many data mining algorithms require the data to be transformed into a tabular format; tabular datasets are the suitable input for many data mining approaches. In any competitive business, success is based on the ability to make an item more appealing to customers than the competition. A number of questions arise in the context of this task: how do we formalize and quantify the competitiveness between two items? Who are the main competitors of a given item? What are the features of an item that most affect its competitiveness? Despite the impact and relevance of this problem to many domains, only a limited amount of work has been devoted toward an effective solution. In this paper, we present a formal definition of the competitiveness between two items, based on the market segments that they can both cover. Our evaluation of competitiveness utilizes customer reviews, an abundant source of information that is available in a wide range of domains. We present efficient methods for evaluating competitiveness in large review datasets and address the natural problem of finding the top-k competitors of a given item. Finally, we evaluate the quality of our results and the scalability of our approach using multiple datasets from different domains.
