Two-step reduction algorithm

Two-Step-SDP approach to clustering and dimensionality reduction

Comparing the results obtained using the Two-Step-SDP algorithm presented in Table II and the standard K-means algorithm displayed in Table III, we can conclude that, in general, both approaches are quite efficient in terms of computational time and solution quality. For the leukemia data set, the K-means algorithm runs considerably faster. Notice that K-means had to be executed separately for clustering attributes and for clustering objects, so its computational time is the sum of the running times of both procedures. With respect to the clusters of attributes, the K-means algorithm does not provide further information on the variance explained by the resulting components, while the Two-Step-SDP algorithm does. In almost all cases, the within- and between-cluster deviances obtained using the Two-Step-SDP and K-means algorithms are quite similar. The major difference in performance appears on the Soybean data set: the Two-Step-SDP algorithm returned a within-cluster deviance of 453.12 and a between-cluster deviance of 2760.87, while the K-means algorithm returned a within-cluster deviance of 205.96 and a between-cluster deviance of 484.2. Regarding the clusters of attributes, the Two-Step-SDP and K-means algorithms return clusters of equal sizes, except for the Soybean and SRBCT data sets. With respect to the clusters of objects, the two algorithms returned different clusters for the Iris, Soybean and SRBCT data sets.
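Since the comparison turns on within- and between-cluster deviance, here is a minimal sketch of how those two quantities can be computed for a K-means partition. The decomposition below is the standard total-scatter one; the exact scaling used in Tables II and III is not given in the excerpt, so the numbers are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_deviances(X, labels):
    """Within- and between-cluster deviances of a partition (illustrative)."""
    grand_mean = X.mean(axis=0)
    within = between = 0.0
    for k in np.unique(labels):
        Xk = X[labels == k]
        centroid = Xk.mean(axis=0)
        within += ((Xk - centroid) ** 2).sum()                     # scatter around centroids
        between += len(Xk) * ((centroid - grand_mean) ** 2).sum()  # centroids vs grand mean
    return within, between

# Example: deviances of a K-means partition of random data
X = np.random.rand(150, 4)
labels = KMeans(n_clusters=3, n_init=10).fit_predict(X)
print(cluster_deviances(X, labels))
```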

A Two Step Adaptive Noise Cancellation System for Dental Drill Noise Reduction

An alternative approach for reducing the additive noise signal in voice communication systems is the ANC system, which employs an adaptive filter. The coefficients of the adaptive filter are adapted so as to minimize the error signal. The performance of the ANC system is determined by the choice of the adaptive filtering algorithm. In contrast to the SS method, the ANC-based NR technique does not need a VAD to distinguish between speech and noise frames. Furthermore, the ANC system does not produce the musical noise effect. Normally, the ANC system requires two microphones. The first microphone signal contains the noisy speech signal and is known as the primary signal. The second microphone, on the other hand, is assumed to be located very close to the noise source and far away from the speech source, so that it picks up mostly the additive noise signal but not the desired speech signal; it is referred to as the reference signal. In practice, it is impossible to place the second microphone so that it picks up only the additive noise signal without being contaminated by the desired speech signal.
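To make the two-microphone ANC structure concrete, here is a minimal LMS-based canceller; the filter order and step size `mu` are illustrative assumptions, not values from the paper. The error signal, the primary signal minus the estimated noise, is itself the enhanced speech output.

```python
import numpy as np

def lms_anc(primary, reference, mu=0.01, order=16):
    """Minimal LMS adaptive noise canceller (sketch).

    primary   -- mic 1: speech + additive noise
    reference -- mic 2: noise-correlated signal
    Returns the error signal, i.e. the enhanced speech estimate.
    """
    w = np.zeros(order)                      # adaptive filter coefficients
    out = np.zeros(len(primary))
    for n in range(order, len(primary)):
        x = reference[n - order:n][::-1]     # most recent reference samples
        y = w @ x                            # filter output = noise estimate
        e = primary[n] - y                   # error = speech estimate
        w += 2 * mu * e * x                  # LMS coefficient update
        out[n] = e
    return out
```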

Effective Detection of Toxigenic Clostridium difficile by a Two Step Algorithm Including Tests for Antigen and Cytotoxin

Others have obtained antigen results like ours by studying the same C. DIFF CHEK-60 (21, 24) or the Triage C. difficile panel (Biosite Diagnostics, San Diego, Calif.) that separately detects antigen and toxin A via membrane-EIA (2, 3, 10, 11, 16, 22). Several of these investigators recognized the potential value of Ag-EIAs in screening for toxigenic C. difficile, especially as an alternative to isolating C. difficile in culture. In particular, Landry et al. (10) proposed a two-step test that consisted of the Triage panel and CCNA for antigen-positive/toxin A-negative specimens. This approach had the advantage of rapidly identifying certain toxigenic (toxin A-positive) strains, with a ≈75% reduction in cell culture workload (similar to ours because Triage toxin A sensitivity was 33%). Snell et al. (21) recently recommended a three-step approach, consisting of Ag-EIA, toxin-EIA, and CCNA. The relative simplicity of our algorithm eliminates the possibility of false-positive toxin-EIA results, while yielding sensitivity, turn-around time, and cost-effectiveness similar to those of the assays discussed above.
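The screening flow Landry et al. describe reduces to a few lines of decision logic; the sketch below is our simplified illustration (function and result labels are ours, not the paper's), showing why the slow cell-culture step is only reached for antigen-positive/toxin A-negative specimens.

```python
def two_step_cdiff(antigen_positive, toxin_a_positive, ccna):
    """Simplified two-step screening flow (illustrative, not the paper's code).

    ccna is a callable running the cell cytotoxicity assay; it is invoked
    only for antigen-positive / toxin A-negative specimens, which is where
    the ~75% reduction in cell-culture workload comes from.
    """
    if not antigen_positive:
        return "negative"                    # antigen screen rules out
    if toxin_a_positive:
        return "toxigenic C. difficile"      # rapid identification
    return "toxigenic C. difficile" if ccna() else "nontoxigenic carriage"
```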

Two Step Resource Block Allocation Algorithm for Data Rate Maximization in LTE Downlink Systems

Compared with 2G and 3G networks, LTE systems provide higher data rates and better transmission quality. Given limited radio resources, one challenge in LTE systems is to support a large number of users and satisfy different users' data rate requirements. Orthogonal Frequency Division Multiplexing (OFDM) is used as the basic transmission scheme in LTE downlink systems and can achieve high system capacity. Using OFDM, multiple users can share the OFDM sub-carriers in a given time slot [1-4]. In LTE systems, sub-carriers are grouped into Resource Blocks (RBs), and each RB consists of 12 adjacent sub-carriers. Two consecutive RBs are called an Allocation Unit (AU) [5]. In practical systems, all AUs allocated to a given user must use the same MCS, which is determined by the AU with the worst channel condition for that user.
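The worst-AU constraint in the last sentence is the crux of the allocation problem: adding an AU with a poor channel can lower the MCS, and hence the rate, of every other AU the user holds. A minimal sketch, with an assumed CQI-to-rate table:

```python
def user_rate(cqi_per_au, allocated_aus, cqi_to_rate):
    """Rate of one user under the worst-AU MCS rule (illustrative).

    All allocated AUs must use the MCS supported by the worst AU,
    so the per-AU rate is dictated by the minimum CQI.
    """
    worst_cqi = min(cqi_per_au[au] for au in allocated_aus)
    return len(allocated_aus) * cqi_to_rate[worst_cqi]

# Hypothetical CQI -> per-AU rate table (made-up numbers)
cqi_to_rate = {1: 0.2, 2: 0.4, 3: 0.6, 4: 0.9}
# AUs 0 and 2 are good, but AU 1 drags the whole allocation down to CQI 2
print(user_rate({0: 3, 1: 2, 2: 4}, [0, 1, 2], cqi_to_rate))  # 3 * 0.4 = 1.2
```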

A Two Step Secure Spectrum Sensing Algorithm Using Fuzzy Logic for Cognitive Radio Networks

In [10], a scheme is proposed that uses robust statistics to approximate the distribution under both hypotheses for each user individually, based on their past data reports. The authors in [11] propose a majority rule in the fusion center to nullify the effects of malicious users. In [5], an effective weighted combining method is proposed to reduce the impact of false information. In [12], a defense scheme is proposed that computes suspicious levels and trust values of the users. In our previous work [13], malicious user detection based on outlier energy detection techniques is proposed, and a filtering method based on the statistical parameters of the sensing results is used to eliminate the effects of malicious users. In this paper, we propose a two-step secure spectrum sensing algorithm. First, based on the statistical parameters of the SUs' sensing results, a pre-filter is designed to remove the sensing results of secondary users that are far from the others. Then, trust weight values are assigned, based on fuzzy logic, to the users whose sensing results pass the filter. Finally, a weighted combining method is used to make the final decision in the fusion center.
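A compact sketch of the two-step fusion just outlined is given below; the pre-filter threshold, the trust-weight shape (a simple distance-based stand-in for the paper's fuzzy-logic rules), and the variable names are all our assumptions.

```python
import numpy as np

def two_step_fusion(energies, k=2.0):
    """Two-step secure fusion sketch: pre-filter, trust weights, combining."""
    e = np.asarray(energies, dtype=float)
    med, sd = np.median(e), e.std() + 1e-9
    kept = e[np.abs(e - med) <= k * sd]            # step 1: drop far-off reports
    trust = 1.0 / (1.0 + np.abs(kept - med) / sd)  # step 2: trust weights (stand-in
                                                   # for the fuzzy-logic assignment)
    return np.average(kept, weights=trust)         # step 3: weighted combining

# The fusion center would compare this statistic to a detection threshold
print(two_step_fusion([4.1, 3.9, 4.3, 9.8, 4.0]))  # the 9.8 report is filtered out
```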

Enhancing Networks Lifetime Using Two-Step Uniform Clustering Algorithm (TSUC) Withdrawal Article

Wireless sensor networks have attracted a great deal of attention from researchers around the globe, owing to their wide applications in industrial, military and agricultural fields. Energy conservation and node deployment strategies play a paramount role in the effective operation of wireless sensor networks. Clustering of nodes is an approach introduced to achieve energy efficiency in the network; a clustering algorithm, if not designed properly, can reduce the life of the network. In this paper, a Two-Step Uniform Clustering (TSUC) algorithm is proposed with the aim of providing connectivity to the nodes in every part of the network. The algorithm increases network lifetime and throughput by re-clustering isolated nodes rather than connecting them through an already connected node. Simulation results show that the proposed TSUC algorithm performs better than other existing clustering algorithms.

An implicit algorithm for two finite families of nonexpansive maps in hyperbolic spaces

are vigorously analyzed for approximation of fixed points of various maps under suitable conditions imposed on the control sequences. The algorithm (i) exhibits only weak convergence, even in the setting of Hilbert spaces. Moreover, Chidume and Mutangadura [20] constructed an example of a Lipschitz pseudocontractive map with a unique fixed point for which the algorithm (i) fails to converge.

Tutorial on EM Algorithm

Maximum likelihood estimation (MLE) is a popular method for parameter estimation in both applied probability and statistics, but MLE cannot handle incomplete or hidden data because it is impossible to maximize the likelihood function of hidden data directly. The expectation-maximization (EM) algorithm is a powerful mathematical tool for solving this problem when there is a relationship between the hidden data and the observed data. Such a hinting relationship is specified by a mapping from hidden data to observed data, or by a joint probability between hidden data and observed data; in other words, the relationship lets us learn about the hidden data by surveying the observed data. The essential idea of EM is to maximize the expectation of the likelihood function over the observed data, based on this hinting relationship, instead of maximizing the likelihood function of the hidden data directly. The pioneers of the EM algorithm proved its convergence; as a result, EM produces parameter estimators just as MLE does. This tutorial aims to explain the EM algorithm in order to help researchers comprehend it. Moreover, some improvements of the EM algorithm are also proposed, such as the combination of EM with a third-order-convergence Newton-Raphson process, the combination of EM with gradient descent, and the combination of EM with particle swarm optimization (PSO).
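As a concrete instance of the expectation and maximization steps described above, here is a minimal EM fit of a two-component one-dimensional Gaussian mixture; the component count, initialization and iteration budget are our illustrative choices, not the tutorial's.

```python
import numpy as np

def em_gmm_1d(x, n_iter=50):
    """Minimal EM for a 2-component 1-D Gaussian mixture (sketch).

    The hidden data is the component that generated each point: the E-step
    computes expected memberships, the M-step re-estimates the parameters.
    """
    mu = np.array([x.min(), x.max()], dtype=float)   # crude initialization
    var = np.array([x.var(), x.var()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibilities (posterior component probabilities)
        d2 = np.subtract.outer(x, mu) ** 2
        lik = pi * np.exp(-d2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        r = lik / lik.sum(axis=1, keepdims=True)
        # M-step: maximize the expected complete-data log-likelihood
        nk = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * np.subtract.outer(x, mu) ** 2).sum(axis=0) / nk
        pi = nk / len(x)
    return mu, var, pi

# Example: recover two clusters from synthetic data
x = np.concatenate([np.random.normal(0, 1, 300), np.random.normal(5, 1, 300)])
print(em_gmm_1d(x))
```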

Prevention of Unauthorized Image Tampering and Secure Data Transmission using Integrated Stegowater Algorithm

Chen proposed a method to hide data in the high-frequency coefficients produced by the DWT, leaving the low-frequency coefficients unaltered [8]. Some basic pre-processing steps are applied before embedding the data. The author divides the method into two modes and three cases: the modes are fixed and varying, and the cases are low, medium and high embedding capacity. Sequence mapping tables are used in raster-scan order to embed the data; extraction is simply the reverse of the embedding process. Results are reported as stego images, capacity and PSNR on six different images. For the fixed mode, the highest PSNR value is 46.83 dB and the lowest is 39.00 dB; for the varying mode, the highest PSNR is 45.85 dB and the lowest is 40.76 dB.
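The general idea, embedding in the high-frequency sub-bands while leaving the low-frequency approximation untouched, can be sketched in a few lines with the PyWavelets package. This is not Chen's mapping-table scheme; the wavelet choice and the LSB-style coefficient quantization are our assumptions.

```python
import numpy as np
import pywt  # PyWavelets

def embed_bits_dwt(img, bits):
    """Hide bits in high-frequency DWT coefficients (illustrative sketch)."""
    cA, (cH, cV, cD) = pywt.dwt2(img.astype(float), 'haar')
    flat = cH.ravel().copy()
    for i, b in enumerate(bits):         # set the LSB of a rounded coefficient
        q = int(round(flat[i]))
        flat[i] = (q & ~1) | b
    cH = flat.reshape(cH.shape)
    # low-frequency coefficients cA are left unaltered, as in the excerpt
    return pywt.idwt2((cA, (cH, cV, cD)), 'haar')

stego = embed_bits_dwt(np.random.rand(64, 64) * 255, [1, 0, 1, 1])
```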

Two steps forward, one step back: current harm reduction policy and politics in the United States

Even as the United States emerged as the global pioneer in legalizing and regulating cannabis, it lags well behind much of western Europe and other regions in embracing harm reduction policies regarding other illicit drugs. Policies vary greatly among US states and even among cities within the same state, thereby making it difficult to generalize about the country as a whole, but some trends are apparent: spreading support for legalizing syringe access, even in relatively conservative parts of the country; rapid expansion of programs and policies to reduce overdose fatalities; growing law enforcement interest in harm reduction approaches to policing drug users and markets; and, belatedly, support for initiating legal drug consumption rooms in a few of the more politically progressive cities. These encouraging developments, it must be stressed, have occurred in a country in which the drug war lumbers on notwithstanding widespread disillusionment with its persistent failures, and that mostly lacks the sorts of social safety nets that cushion the harms of drug misuse and prohibitionist policies in other economically advanced nations.

Robust Activity Recognition Combining Anomaly Detection and Classifier Retraining

Abstract—Activity recognition systems based on body-worn motion sensors suffer a decrease in performance during the deployment and run-time phases because of probable changes in the sensors (e.g. displacement or rotation), which is the case in many real-life scenarios (e.g. a mobile phone in a pocket). Existing approaches to achieving robustness tend to sacrifice information (e.g. by using rotation-invariant features) or to reduce the weight of the anomalous sensors at the classifier fusion stage (adaptive fusion), ignoring data which might still be perfectly meaningful, although different from the training data. We propose to use adaptation to rebuild the classifier models of the sensors which have changed position via a two-step approach: in the first step, we run an anomaly detection algorithm to automatically detect which sensors are delivering unexpected data; subsequently, we trigger a system self-training process, so that the remaining classifiers retrain the "anomalous" sensors. We show the benefit of this approach on a real activity recognition dataset comprising data from 8 sensors used to recognize locomotion. The approach achieves accuracy similar to the upper baseline obtained by retraining the anomalous classifiers on the new data.
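A schematic version of the two-step adaptation, with an assumed z-score anomaly detector and majority-vote pseudo-labelling standing in for the paper's components, might look like this:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def two_step_adaptation(X_new, clfs, train_means, train_stds, z_thresh=3.0):
    """Sketch: detect anomalous sensors, then retrain them by self-training.

    X_new[s] holds the run-time feature matrix of sensor s; clfs[s] is its
    fitted per-sensor classifier. Integer class labels are assumed.
    """
    n = len(clfs)
    # step 1: flag sensors whose feature statistics drifted from training
    z = [np.abs((X_new[s].mean(axis=0) - train_means[s]) / train_stds[s]).max()
         for s in range(n)]
    anomalous = [s for s in range(n) if z[s] > z_thresh]
    healthy = [s for s in range(n) if z[s] <= z_thresh]
    # step 2: pseudo-labels from the still-trusted sensors (majority vote)
    votes = np.stack([clfs[s].predict(X_new[s]) for s in healthy])
    labels = np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
    for s in anomalous:                  # self-training of the flagged models
        clfs[s] = LogisticRegression(max_iter=1000).fit(X_new[s], labels)
    return clfs
```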

Two-Step Authentication FAQ

With two-step authentication, it is much more difficult for someone to impersonate you online. This step will help to protect direct deposit information, research, intellectual property, and faculty, staff and student personal information. Stanford will likely require additional measures in the near future, including password strength requirements, upgrades or replacements of old operating systems such as Windows XP, and encryption of laptops and mobile devices.

Ciprofloxacin: A Two Step Process

Ciprofloxacin (1) is used to treat a wide variety of infections. Ciprofloxacin is a broad-spectrum antibiotic of the fluoroquinolone class, active against both Gram-positive and Gram-negative bacteria (Figure 1). It functions by inhibiting DNA gyrase and the type-II and type-IV topoisomerases necessary to separate bacterial DNA, thereby inhibiting cell division. The drug was invented by Bayer in 1983 and introduced in 1987. It became a blockbuster drug, with sales reaching two billion euros in 2001, and generics have since been introduced (after 2014). It is included in the WHO Model List of Essential Medicines (2015).

Feature Selection Algorithm for High Dimensional Data using Fuzzy Logic

Six real-world datasets from the UCI repository were used. Three of them are classification problems with discrete features, the next two are classification problems with discrete and continuous features, and the last is an approximation problem. The learning algorithm used to check the quality of the selected features is a classification and regression tree learner with pruning. The process and algorithms are implemented in the Orange data mining system. Overall, the non-parametric tests, namely the Wilcoxon and Friedman tests, are suitable for our problems. They are appropriate since they assume some, but limited, commensurability, and they are safer than parametric tests since they do not assume normal distributions or homogeneity of variance. There is an alternative opinion among statisticians that significance tests should not be performed at all, since they are often misused, either through misinterpretation or by putting too much stress on their results. The main disadvantage of the system is the low accuracy of its search process.
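For the statistical comparison itself, both tests are available in SciPy; the accuracy matrix below is made-up purely to show the call signatures (rows are datasets, columns are competing methods).

```python
import numpy as np
from scipy import stats

# Hypothetical accuracies: 6 datasets (rows) x 3 methods (columns)
acc = np.array([
    [0.91, 0.88, 0.85],
    [0.84, 0.86, 0.80],
    [0.78, 0.75, 0.74],
    [0.92, 0.90, 0.89],
    [0.87, 0.85, 0.83],
    [0.70, 0.69, 0.66],
])

# Friedman test over all methods: non-parametric, no normality assumption
print(stats.friedmanchisquare(acc[:, 0], acc[:, 1], acc[:, 2]))

# Wilcoxon signed-rank test for one pairwise comparison
print(stats.wilcoxon(acc[:, 0], acc[:, 1]))
```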

A. Algorithm for estimation of the calibrating grid step

A. Algorithm for the contact area projection calculation

The proposed algorithm for the contact area calculation is based on analysis of the residual scratch trace in the part corresponding to the moment the indenter is removed from the surface. The algorithm operates by building a set of sections starting from the conditional point of contact between the indenter apex and the material. The point belonging to the contact area boundary is defined as the maximum of z(x, y) along each section. The next step is estimation of the array of z(x, y) values corresponding to the contact area and calculation of the area projected onto the XY plane. The projected area is calculated as the number of points multiplied by the scanning grid cell area. The calculated projected area value is used to evaluate the corrected hardness from the scratch test. Fig. 8 shows the algorithm schema.
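A rough numerical sketch of that procedure follows; the number of radial sections, the sampling of each section, and the names are our assumptions, not the paper's values.

```python
import numpy as np

def projected_contact_area(z, cell_area, apex, n_sections=180, n_steps=200):
    """Projected contact area from a height map z (illustrative sketch).

    For each radial section from the indenter apex, the contact boundary is
    taken at the maximum of z along the section; the projected area is the
    number of enclosed grid points times the scanning grid cell area.
    """
    h, w = z.shape
    ay, ax = apex
    inside = np.zeros_like(z, dtype=bool)
    for theta in np.linspace(0.0, 2 * np.pi, n_sections, endpoint=False):
        r = np.linspace(0, min(h, w) / 2, n_steps)   # sample one radial section
        ys = np.clip((ay + r * np.sin(theta)).astype(int), 0, h - 1)
        xs = np.clip((ax + r * np.cos(theta)).astype(int), 0, w - 1)
        k = z[ys, xs].argmax()           # boundary = maximum of z on the section
        inside[ys[:k + 1], xs[:k + 1]] = True
    return inside.sum() * cell_area      # points inside times grid cell area
```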

A step counting hill climbing algorithm

A single-parameter local search metaheuristic called the late acceptance hill climbing algorithm (LAHC) was proposed by Burke and Bykov (2008). The main idea of LAHC is to compare, in each iteration, a candidate solution with the solution that was current several iterations earlier, and to accept the candidate if it is better. The number of backward iterations is the only LAHC parameter, referred to as the "history length". An extensive study of LAHC was carried out in (Burke and Bykov 2012), where the salient properties of the method were discussed. First, its total search/convergence time was proportional to the history length, which is essential for its practical use. Also, it was found that despite apparent similarities with other local search metaheuristics such as simulated annealing (SA) and the great deluge algorithm (GDA), LAHC has an underlying distinction: it does not require a guiding mechanism such as, for example, the cooling schedule in SA. This makes the method effective and reliable, and it was demonstrated that LAHC works well in situations where the other two heuristics fail to produce good results.
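The whole method fits in a dozen lines, which is much of its appeal; here is a sketch with assumed cost and neighbor callables (the history length is the algorithm's single parameter).

```python
def lahc(initial, cost, neighbor, history_length=500, max_iters=100_000):
    """Late acceptance hill climbing (sketch of Burke & Bykov's scheme).

    A candidate is accepted if it is no worse than the current solution or
    than the solution recorded `history_length` iterations earlier.
    """
    s, c = initial, cost(initial)
    hist = [c] * history_length          # the single LAHC parameter
    best, best_c = s, c
    for i in range(max_iters):
        cand = neighbor(s)
        cand_c = cost(cand)
        v = i % history_length
        if cand_c <= hist[v] or cand_c <= c:   # late-acceptance rule
            s, c = cand, cand_c
            if c < best_c:
                best, best_c = s, c
        hist[v] = c                      # record the current cost
    return best, best_c
```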

Solid State Drives And The Sort-Merge

Considering all of the foregoing, the point to be recognized is that a sort-merge optimized to run in a spinning-disk environment will probably have exploited the trade-off between seek time and transmission time and employed at least a two-step merge, like that demonstrated by Folk and Zoellick. IBM (2012b) implies a similar approach in their DFSORT utility when they acknowledge that "The Blockset technique might require more intermediate work space than Peerage/Vale [another of IBM's sorting techniques]." In the analysis and comparisons below, the 2-Step Sort-merge is the HDD standard for sorting large files.
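The shape of a two-step sort-merge is easy to see in miniature: step one produces sorted runs (written to scratch files in a real external sort), step two merges all runs in a single pass. This toy keeps everything in memory; `run_size` is an arbitrary stand-in for available buffer space.

```python
import heapq

def two_step_sort_merge(records, run_size=1000):
    """Toy two-step sort-merge: sort fixed-size runs, then one k-way merge."""
    runs = [sorted(records[i:i + run_size])          # step 1: sorted runs
            for i in range(0, len(records), run_size)]
    return list(heapq.merge(*runs))                  # step 2: single merge pass
```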

Application of static slicer 1.01 for static slicing

The above algorithm describes the overall flow of the slicing technique. Initially, a file is read in and its control flow graph nodes are created; the file's line numbers are then assigned to the nodes. The root node and the slicing criterion C = (L, V), where L is the line number and V is the slicing variable, for example (10, {x}), are initialized, as is the criterion variable x itself. The first line of the file is taken as the start node, the nodes are counted, and the criterion is attached to the start node. The nodes of the control flow graph are stored in a binary search tree, and a vector is created to store slice points. The algorithm then finds each child node together with its predecessor and successor nodes, and adds variables. If no variable is present, there is no slice point and the process ends. If a variable is present, the slicing criterion is fetched and the relevant variables are calculated, based on the criterion, and stored in a vector. If the criterion is null, the process ends; otherwise the node is located by its line number. Once the node is found, the slice is obtained from the control flow graph by recursion: child nodes are derived from the root node. If the number of child nodes is greater than zero, the slicing criterion is fetched again; otherwise the line numbers (the slicing points) are displayed and the process ends. When the slicing criterion is found, the line numbers and nodes are retrieved; if a node lies on a branch, its line number is found and the influencing variables are checked. If an influencing variable matches the criterion and the variable vector, the line number is recorded and the process repeats.
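The flow above concerns a full control-flow-graph slicer; the core idea of collecting slice points by propagating relevant variables backwards can be shown on straight-line code in a few lines (data dependences only, with hypothetical names, and no branch handling).

```python
def backward_slice(stmts, criterion_line, criterion_vars):
    """Backward static slice over straight-line code (data dependences only).

    stmts maps line number -> (defined_vars, used_vars).
    """
    relevant = set(criterion_vars)
    slice_lines = []
    for line in sorted((l for l in stmts if l <= criterion_line), reverse=True):
        defs, uses = stmts[line]
        if defs & relevant:                       # defines a relevant variable
            slice_lines.append(line)
            relevant = (relevant - defs) | uses   # propagate dependences backwards
    return sorted(slice_lines)

# Slice on criterion C = (4, {x}) of:
# 1: a = 1   2: b = 2   3: a = b + 1   4: x = a
prog = {1: ({'a'}, set()), 2: ({'b'}, set()),
        3: ({'a'}, {'b'}), 4: ({'x'}, {'a'})}
print(backward_slice(prog, 4, {'x'}))             # -> [2, 3, 4]
```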

Two-Step Four-Electron Reduction of Molecular Oxygen at Iodine-Adatoms-Modified Gold Electrode in Alkaline Media

The second consecutive peak current, appearing at ca. 0.83 V and corresponding to the further reduction of the electrogenerated HO₂⁻ to OH⁻, was thus increased by ca. 12 times. The two-step four-electron ORR is a well-known phenomenon, but the electrogeneration of HO₂⁻ as a stable intermediate at metallic electrodes is hardly ever observed because of its rapid disproportionation reaction, which is very promptly catalyzed by the electrode surfaces themselves. The in situ electrogeneration of stable HO₂⁻ using the I(ads)-Au electrode could be useful in many electrochemical and chemical processes.

A Comparative Study on using Principle Component Analysis with different Text Classifiers

K-Nearest Neighbor (KNN): The KNN is one of the simplest lazy classification algorithms [26] [27], also well known as instance-based learning. The KNN classifier is based on the assumption that the classification of an instance is most similar to the classification of other instances that are nearby in the vector space. To categorize an unknown document d, the KNN classifier ranks the documents in the training set, finds its k nearest neighbors, which form a neighborhood of d, and then uses majority voting among the categories of the documents in the neighborhood to decide the class label of d. KNN is instance-based: the function is only approximated locally, and all computation is deferred until classification. KNN is used in many applications because it is effective, non-parametric, and easy to implement. However, its classification time is long, and it is not easy to find the optimal value of k. Generally, the best choice of k depends on the data: larger values of k reduce the effect of noise on the classification but make the boundaries between classes less distinct. A good k can be selected by various heuristic techniques.
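A bare-bones version of the classifier just described, with Euclidean distance as an assumed similarity measure:

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=5):
    """Minimal KNN sketch: majority vote among the k nearest neighbors."""
    dists = np.linalg.norm(X_train - x, axis=1)   # distance to each instance
    nearest = np.argsort(dists)[:k]               # indices of the k nearest
    return Counter(y_train[i] for i in nearest).most_common(1)[0][0]

# Toy usage
X = np.array([[0, 0], [0, 1], [5, 5], [6, 5]])
y = ['sports', 'sports', 'tech', 'tech']
print(knn_predict(X, y, np.array([5.5, 5.0]), k=3))  # -> 'tech'
```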
