Top PDF modified kernel:

Modified Kernel Functions by Geodesic Distance

SVM is a linear classifier in the parameter space, but it is easily extended to a nonlinear classifier of the φ-machine type by mapping the space S = { x } of the input data into a high-dimensional feature space F = { φ(x) }. By choosing an appropriate mapping φ, the data points become linearly separable or nearly linearly separable in the high-dimensional space, so that structural risk minimization can be applied directly. Instead of computing the mapped patterns φ(x) explicitly, we only need the dot products between the mapped patterns. They are directly available from the kernel function that encapsulates φ(x). By choosing different kinds of kernels, the SVM can realize radial basis function (RBF), polynomial, and multilayer perceptron classifiers. Compared with the traditional way of implementing them, the SVM has the extra advantage of automatic model selection, in the sense that both the optimal number and the locations of the basis functions are obtained automatically during training.
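The kernel trick described above can be checked numerically in a small sketch: for the homogeneous degree-2 polynomial kernel, the kernel value equals the dot product of explicitly mapped points. The feature map `phi` and the toy vectors below are illustrative assumptions, not from the text:

```python
import numpy as np

def phi(x):
    """Explicit degree-2 feature map for 2-D input: (x1^2, x2^2, sqrt(2)*x1*x2)."""
    return np.array([x[0]**2, x[1]**2, np.sqrt(2) * x[0] * x[1]])

def poly_kernel(x, y):
    """Homogeneous polynomial kernel of degree 2: k(x, y) = (x . y)^2."""
    return np.dot(x, y) ** 2

x = np.array([1.0, 2.0])
y = np.array([3.0, 0.5])

# The kernel evaluates the feature-space dot product without mapping explicitly.
assert np.isclose(poly_kernel(x, y), np.dot(phi(x), phi(y)))
```

The same identity is what makes RBF and other kernels usable: the mapped space may even be infinite-dimensional, yet only kernel evaluations are ever computed.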

Kernel Nearest Neighbour Based Genetic Algorithm And Modified Kernel-Based Fuzzy C-Means Based MRI Image Brain Tumor Segmentation And Classification

Abstract: The recognition of a brain tumor and its classification from modern imaging modalities is an essential concern, yet a time-consuming and tedious task when performed by radiologists or clinical experts. The accuracy of detection and tumor-stage classification by radiologists depends on their experience alone, so computer-aided technology is important for improving diagnostic accuracy. In this research, the tumor region is segmented from the brain image using the Modified Kernel-Based Fuzzy C-Means (MKFCM) algorithm. The input images are resized to 256×256 in the pre-processing stage. The pre-processed MRI image is segmented by MKFCM, a flexible machine-learning method for locating objects within complex patterns. Next, Hybrid Feature Extraction (HFE) is performed on the segmented image to enlarge the feature subsets. Feature selection (FS) is performed by a Kernel Nearest Neighbour (KNN) based Genetic Algorithm (GA) in order to acquire the best feature values. The best feature values are given as input to a Naive Bayes (NB) classifier, which classifies the MRI images into Meningioma, Glioma, and Pituitary regions. The performance of the proposed KNN-GA-FS-NB method is validated on the T1-WCEMRI dataset.

Optimal error bound and modified kernel method for a space fractional backward diffusion problem

In this paper, we consider a backward problem for a space-fractional diffusion equation. This problem is ill-posed, i.e., the solution does not depend continuously on the data. The optimal error bound for the problem under a source condition is analyzed. Based on the idea of a modified ‘kernel’, a regularization method is constructed, and convergence estimates are obtained under an a priori regularization parameter choice rule.

16 Read more

A modified kernel method for a time fractional inverse diffusion problem

In this paper, we propose a modified kernel method for solving a time-fractional inverse diffusion problem by producing a stable approximate solution. For this regularization strategy, in the presence of noisy data, we establish and prove convergence estimates for the cases ≤ x < under a priori bound assumptions on the exact solution and suitable choices of the regularization parameter. From the results of numerical simulations, it seems clear that the proposed method works well for the model problem with small measurement error.

11 Read more

Reproducing kernel Hilbert space method for Cox proportional hazard model

The research work starts with an exploration of the RKHS and its properties. After all aspects of reproducing kernel Hilbert spaces are understood, a new RKHS is constructed and the properties that qualify the kernel as an RKHS are shown. Once the new kernel is constructed, an initial exploratory data analysis is performed using the negative partial log-likelihood as the loss function. The loss function is minimized to find the optimal parameter values of the survival data for the kernel method, and the Newton-Raphson method is used to solve the optimization problem. Survival data of HIV-positive patients from a public hospital are used in the application of the new modified kernel method. The exponential values of the kernel model are observed to estimate the survival of patients.
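The Newton-Raphson step used for the loss minimization can be sketched on a toy convex loss; the loss function, starting point, and tolerance below are illustrative assumptions, not the paper's partial likelihood:

```python
import numpy as np

def newton_minimize(grad, hess, beta0, tol=1e-10, max_iter=50):
    """Newton-Raphson: iterate beta <- beta - H^{-1} g until the gradient vanishes."""
    beta = beta0
    for _ in range(max_iter):
        g = grad(beta)
        if abs(g) < tol:
            break
        beta -= g / hess(beta)
    return beta

# Toy convex loss standing in for a negative partial log-likelihood:
# f(b) = exp(b) - 2*b, minimized where f'(b) = exp(b) - 2 = 0, i.e. b = log(2).
grad = lambda b: np.exp(b) - 2.0
hess = lambda b: np.exp(b)

beta_hat = newton_minimize(grad, hess, beta0=0.0)
assert np.isclose(beta_hat, np.log(2.0))
```

For the multivariate survival-data case the scalar division becomes a linear solve against the Hessian matrix, but the iteration is the same.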

WRL 98 6 pdf

Compared to Figure 10 (for NetCache with caching disabled), Figure 12 suggests that our kernel changes have a similar effect on CPU time consumption whether or not caching is enabled; this is probably because the caching component of NetCache uses relatively little CPU time. This conclusion is supported by the linear regressions shown in Table 9, but since the use of caching seems to reduce the CPU-time efficiency of the entire system, the slopes are considerably steeper than they are in Table 4 (for NetCache with caching disabled). With our modified kernel and NetCache, the idle-time X-intercept with caching disabled is at 69 requests/sec, but drops to 49 requests/sec when caching is enabled. (Remember that the X-intercept is not a good predictor of the actual peak request rate, as is clear from Figure 12.)

Improving Density Estimation by Incorporating Spatial Information

Figure 2: This is a model-validating example with dense data set of 8000 events. The piecewise-constant true density is given in (a), and the valid region is provided in (b). The sampled events are shown in (c). (d) and (e) show the two current density estimation methods, Kernel Density Estimation and TV MPLE. (f), (g), and (h) show the density estimates from our methods. The color scale represents the relative probability of an event occurring in a given pixel. The images are 80 pixels by 80 pixels.
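Of the two baseline methods named in the caption, Kernel Density Estimation is simple to sketch. A minimal 1-D Gaussian KDE follows (the data, grid, and bandwidth are illustrative choices, not those of the paper, which works in 2-D):

```python
import numpy as np

def gaussian_kde(samples, grid, bandwidth):
    """Vanilla Gaussian kernel density estimate evaluated on a 1-D grid."""
    diffs = (grid[:, None] - samples[None, :]) / bandwidth
    weights = np.exp(-0.5 * diffs**2) / np.sqrt(2 * np.pi)
    return weights.mean(axis=1) / bandwidth

rng = np.random.default_rng(0)
samples = rng.normal(loc=0.0, scale=1.0, size=2000)
grid = np.linspace(-4, 4, 81)
density = gaussian_kde(samples, grid, bandwidth=0.3)

# Rectangle-rule mass over the grid; should be close to 1 for a valid density.
mass = density.sum() * (grid[1] - grid[0])
```

The spatially informed estimators in the figure improve on this baseline by restricting mass to the valid region.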


Kernel Mean Shrinkage Estimators

To demonstrate the leave-one-out cross-validation procedure, we conduct similar experiments in which the parameter λ is chosen by the proposed LOOCV procedure. Figure 5 depicts the percentage of improvement (with respect to the empirical risk of the KME) as we vary the sample size and dimension of the data. Clearly, B-KMSE, R-KMSE and S-KMSE outperform the standard estimator. Moreover, both R-KMSE and S-KMSE tend to outperform the B-KMSE. We can also see that the performance of S-KMSE depends on the choice of kernel. This makes sense intuitively because S-KMSE also incorporates the eigen-spectrum of K, whereas R-KMSE does not. The effects of both sample size and data dimensionality are also transparent from Figure 5. While it is intuitive to see that the improvement gets smaller with increase in sample size, it is a bit surprising to see that we can gain much more in high-dimensional input space, especially when the kernel function is non-linear, because the estimation happens in the feature space associated with the kernel function rather than in the input space. Lastly, we note that the improvement is more substantial in the “large d, small n” paradigm.
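The shrinkage idea behind these estimators can be sketched in its simplest form: shrink the empirical kernel mean toward zero. The data, kernel width, and shrinkage amount below are illustrative; the B-KMSE/R-KMSE/S-KMSE estimators choose λ in more refined ways (e.g., the LOOCV procedure above):

```python
import numpy as np

def rbf_gram(X, gamma):
    """RBF Gram matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2)."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-gamma * d2)

def kme_weights(n, lam=0.0):
    """Weights of the (shrunk) empirical kernel mean mu = sum_i w_i k(x_i, .).
    lam = 0 gives the standard estimator; lam > 0 shrinks toward zero."""
    return np.full(n, (1.0 - lam) / n)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
K = rbf_gram(X, gamma=0.5)

w0 = kme_weights(50)            # standard kernel mean embedding
wl = kme_weights(50, lam=0.2)   # simple shrinkage estimator

# Squared RKHS norm ||mu||^2 = w^T K w: shrinking scales it by (1 - lam)^2.
assert np.isclose(wl @ K @ wl, (1 - 0.2)**2 * (w0 @ K @ w0))
```

The bias introduced by shrinkage is traded against reduced variance, which is why the improvement in Figure 5 grows in the "large d, small n" regime.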

Structural Evaluation of the Effect of Pulverized Palm Kernel Shell (PPKS) on Cement-Modified Lateritic Soil Sample

Domestic and industrial wastes are generated every day in large quantities, and the safe disposal of these waste materials is increasingly becoming a major concern around the world [1, 2, 3]. Palm Kernel Shell (PKS) is regarded as a waste from oil processing [4, 5]. It has been shown that approximately 15 to 18 tonnes of fresh fruit bunches are produced per hectare per year, and PKS comprises about 64% of the bunch mass [6, 7]. It is observed that in developing countries, Nigeria included, waste PKS is either burnt to supply energy at palm oil mills or left in piles to compost.

Effects of some kernel factors on palm kernel oil extraction using a screw press

A p-value of 0.3110 indicates that the contribution of moisture content (within the range of 3% to 10% w.b.) to changes in PKO yield for samples heated at 130 °C for 10 min was not significant at the 5% significance level (p > 0.05). The treatment mean yield was 179.00 ± 19.20 mL/500 g-kernels. The plot of PKO yield versus kernel moisture content is shown in Figure 1, which shows that kernels at 5% moisture content (w.b.) gave the highest mean PKO yield of 190.00 ± 8.59 mL/500 g-kernels. Comparison of treatment means shows that the differences in mean yields among the 5%, 7%, and 10% w.b. kernel samples were not significant, but the difference between the 3% and 5% w.b. samples was significant. The figure equally shows that a decrease in KMC produced increments in PKO yield, except at a KMC of 3% w.b., where oil loss occurred in the heating vessel before the actual oil extraction in the screw press.

Group sequential testing of homogeneity in finite mixture models.

some of the parameters are known. Liang and Rathouz (1999) define a score function which is sensitive toward a given alternative. This method also has nice mathematical and statistical properties through the choice of the alternative, which is somewhat arbitrary. Chen et al. (2001) propose a modified likelihood ratio test (MLRT) for homogeneity in finite mixture models. The MLRT provides a nice solution to this situation by simply adding a penalty term to the log-likelihood function. The limiting distribution of the MLRT statistic is a mixture of chi-squared distributions for a large variety of mixture models. In addition, it is asymptotically most powerful under local alternative models when there are no structural parameters (i.e., nuisance parameters).
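For a two-component mixture with mixing proportion π, the penalized log-likelihood behind the MLRT can be sketched as follows (this follows the form in Chen et al. (2001); C is a tuning constant from that paper, and the exact parametrization should be checked against the original):

```latex
\mathrm{pl}_n(\pi, \theta_1, \theta_2)
  = \sum_{i=1}^{n} \log\!\bigl[(1-\pi)\, f(x_i;\theta_1) + \pi\, f(x_i;\theta_2)\bigr]
  + C \log\!\bigl[4\pi(1-\pi)\bigr]
```

The penalty tends to −∞ as π → 0 or π → 1, keeping the mixing proportion away from the boundary; the MLRT statistic is then twice the difference of maximized penalized log-likelihoods under the mixture and homogeneous models.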

1. Feature based growth rate analysis and pattern recognition of mesenchymal stem cells

Abstract - A stem cell is a generic cell that can make exact copies of itself. Stem cells are master cells and can grow into different cell types in the body during development. Different parts of the body are made up of different kinds of cells; stem cells are not specific to one area and can turn into skin, bone, blood, or brain cells. Occasionally stem cells do not develop correctly once injected into the human body, which leads to complications. To avoid this problem, the growth of the stem cells should be monitored with the help of morphological and textural characteristics. Meanwhile, the rapid development of feature extraction and pattern recognition techniques provides new approaches to complex image retrieval, where traditional machine learning methods struggle to achieve ideal retrieval results. Pattern recognition is one of the most important and actively researched branches of artificial intelligence; it is the science of making machines recognize patterns and classify them into desired categories in a simple and reliable way, much as humans do. For this reason, a new approach is proposed for image features based on the Modified Non-Subsampled Contourlet Transform (MNSCT), Radial Basis Function Kernel Principal Component Analysis (RBFKPCA), and the Relevance Vector Machine (RVM). The proposed method extracts the salient features of stem cell images using the MNSCT, which finds the best feature points. The RVM is a probabilistic model whose functional form is equivalent to the SVM; it achieves comparable recognition accuracy to the SVM, yet provides a full predictive distribution and requires substantially fewer kernel functions.

C-Support Vector Classification: Estimation of MS Subgroup Classification with Selected Kernels and Parameters

A second important point in SVM is the so-called kernel trick: the training data samples enter the model only in an implicit nonlinear form. For instance, data represented through their dot products (inner products), as in Figure 2, illustrate the kernel trick. In this way it is possible to work in a different space by setting up a linear model (right pane) instead of using nonlinear models (left pane); in the new space the data can be split by a simple hyperplane. The C-SVC algorithm, which is able to classify SVM data with multiple classes, was used.
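The passing-into-a-different-space idea can be sketched numerically: a class boundary that is a circle in the input space becomes a simple threshold (a linear model) after a quadratic feature map. The data and threshold rule below are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(400, 2))
y = (X[:, 0]**2 + X[:, 1]**2 < 1).astype(int)   # inner disc vs. outer ring

# No straight line separates the classes in the input space, but mapping each
# point to z = x1^2 + x2^2 (one coordinate of the degree-2 feature map) makes
# them separable by a single threshold, i.e. a linear model in the new space.
z = X[:, 0]**2 + X[:, 1]**2
threshold = (z[y == 1].max() + z[y == 0].min()) / 2  # any value in the gap works
accuracy = ((z < threshold).astype(int) == y).mean()
assert accuracy == 1.0
```

A kernel classifier such as C-SVC performs the same trick implicitly, never forming the mapped coordinates.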

SINGLE SYSTEM IMAGE (SSI)

Cluster operating systems support an efficient execution of parallel applications in an environment shared with sequential applications. A goal is to pool resources in a cluster to provide better performance for both sequential and parallel applications. To realize this goal, the operating system must support gang scheduling of parallel programs, identify idle resources in the system (such as processors, memory, and networks), and offer globalized access to them. It should optimally support process migration to provide dynamic load balancing as well as fast interprocess communication for both the system- and user-level applications. The OS must make sure these features are available to the user without the need for additional system calls or commands. OS kernels supporting SSI include SCO UnixWare NonStop Clusters (Walker and Steel, 1999a, 1999b), Sun Solaris-MC (http://www.cs.umd.edu/~keleher/dsm.html), GLUnix (Ghormley et al., 1998), and MOSIX (Barak and La’adan, 1998).

Identification of nonlinear systems using generalized kernel models

In this paper, we extend the standard kernel modeling approach. Specifically, we consider the use of a generalized kernel model for nonlinear systems, in which each kernel regressor has an individually tuned diagonal covariance matrix. Such a generalized kernel regression model has the potential of enhancing modeling capability and producing sparser final models, compared with the standard approach of a single fixed common variance. The difficult issue, however, is how to determine these kernel covariance matrices. We note that the correlation function between a kernel regressor and the training data defines the “similarity” between the regressor and the training data, and it can be used to “shape” the regressor by adjusting the associated kernel covariance matrix in order to maximize the absolute value of this correlation function. A guided random search method, referred to as the weighted optimization algorithm, is considered to perform the associated optimization task. This weighted optimization algorithm has its roots in boosting [20]–[23]. Since the solution obtained by this weighted optimization algorithm may depend on the initial choice of population, the algorithm is augmented into a repeated weighted optimization method to provide robust optimization and guarantee stable “global” solutions regardless of the initial choice of population. The determination of kernel covariance matrices basically provides the pool of regressors, or the full regression matrix, from which a parsimonious subset model can be selected using a standard kernel model construction approach.
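A single generalized kernel regressor of the kind described, a Gaussian with an individually tuned diagonal covariance, can be sketched as follows. The centres, variances, and data are illustrative; the paper tunes the covariances via the weighted optimization algorithm, not by hand:

```python
import numpy as np

def gaussian_regressor(X, centre, diag_var):
    """Gaussian kernel regressor with an individually tuned diagonal covariance:
    k(x) = exp(-0.5 * sum_d (x_d - c_d)^2 / var_d)."""
    d2 = ((X - centre)**2 / diag_var).sum(axis=1)
    return np.exp(-0.5 * d2)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))

# Two regressors sharing a centre but with different diagonal covariances:
# the second is much wider along the first input dimension.
k_narrow = gaussian_regressor(X, centre=np.zeros(2), diag_var=np.array([0.5, 0.5]))
k_wide = gaussian_regressor(X, centre=np.zeros(2), diag_var=np.array([5.0, 0.5]))

# Widening any variance can only increase each kernel response.
assert np.all(k_wide >= k_narrow)
```

Allowing each regressor its own `diag_var` is what distinguishes the generalized model from the standard single-common-variance kernel model.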

Reduces Solution of Fredholm Integral Equation to a System of Linear Algebraic Equation

The objective of this research is to study some types of kernels of integral equations, such as the iterated kernel, the symmetric kernel, the difference kernel, and the resolvent kernel, and to determine the resolvent kernel for the Fredholm integral equation (FIE) and the Volterra integral equation (VIE). It is shown that there is a relation between the iterated kernel and the resolvent kernel, as illustrated by some examples of these kernels. The solution of the Fredholm integral equation is also found by reducing it to a system of linear algebraic equations, and some example problems of solving such systems are given.

Learning the Kernel with Hyperkernels (Kernel Machines Section)

test various simple approximations which bound the leave-one-out error, or some measure of the capacity of the SVM. The notion of Kernel Target Alignment (Cristianini et al., 2002) uses the objective function tr(K y yᵀ), where y are the training labels and K is from the class of kernels spanned by the eigenvectors of the kernel matrix of the combined training and test data. The semidefinite programming (SDP) approach (Lanckriet et al., 2004) uses a more general class of kernels, namely a linear combination of positive semidefinite matrices. They minimize the margin of the resulting SVM using an SDP for kernel matrices with constant trace. Similar to this, Bousquet and Herrmann (2002) further restrict the class of kernels to the convex hull of the kernel matrices normalized by their trace. This restriction, along with minimization of the complexity class of the kernel, allows them to perform gradient descent to find the optimum kernel. Using the idea of boosting, Crammer et al. (2002) optimize ∑ₜ βₜ Kₜ, where βₜ are the weights used in the boosting algorithm.
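The alignment objective tr(K y yᵀ), equivalently yᵀKy, and its normalized form from Cristianini et al. (2002) can be sketched directly (the toy labels are ours; the normalization by Frobenius norms follows the alignment definition):

```python
import numpy as np

def alignment(K, y):
    """Kernel-target alignment: <K, y y^T>_F / (||K||_F * ||y y^T||_F).
    Note tr(K y y^T) = y^T K y is the unnormalized objective."""
    yyT = np.outer(y, y)
    return (y @ K @ y) / (np.linalg.norm(K) * np.linalg.norm(yyT))

y = np.array([1.0, 1.0, -1.0, -1.0])

# The "ideal" kernel y y^T aligns perfectly with the labels ...
assert np.isclose(alignment(np.outer(y, y), y), 1.0)
# ... while the identity matrix (no label structure) aligns only partially.
print(alignment(np.eye(4), y))  # 0.5
```

Kernel-learning methods of this family pick K within some feasible class to maximize this quantity on the training labels.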

Forecast of fund volatility using least squares wavelet support vector regression machines

The results of out-of-sample forecasting for SZSE fund volatility from the four models are given in Table 3. It shows that the four statistical metrics RMSE, MAE, LL, and LINEX of LS-WSVR1, LS-WSVR2, and LS-WSVR3 are all smaller than those of LS-SVR, indicating that LS-WSVR is superior to LS-SVR in out-of-sample volatility forecasting ability. The main reason for the outperformance of LS-WSVR over LS-SVR is that wavelet kernel functions can approximate arbitrary objective functions well due to their multi-resolution property.


Correlation and path coefficient analysis for yield and quality traits under organic fertilizer management in rice (Oryza sativa L.)

A field experiment was conducted with 32 rice (Oryza sativa L.) genotypes at the wetland farm of S.V. Agricultural College, Tirupati, situated at an altitude of 182.90 m above mean sea level, 13°N latitude and 79°E longitude, during Kharif 2009. Seeds of the 32 genotypes were sown in a raised nursery bed, and thirty-day-old seedlings of each genotype were transplanted at a spacing of 20 cm between rows and 15 cm between plants within a row in a randomized block design with three replications. Each genotype was grown in 3 rows with a plot size of 2.4 m². The crop was grown with the application of FYM and neem cake equivalent to 120 kg N ha⁻¹. The recommended agronomic practices and plant protection measures were followed to ensure a normal crop. Five competitive plants were selected randomly from the center row of each genotype in each replication, and observations were recorded for characters such as number of effective tillers per plant, plant height, panicle length, number of grains per panicle, 1000-grain weight, kernel length, kernel breadth, kernel length/breadth ratio, kernel length after cooking, kernel elongation ratio, harvest index, and grain yield per plant, except days to 50% flowering and days to maturity; the latter two characters were recorded on a plot basis. Panicle and grain characters were recorded on five panicles of the selected plants. Correlation analysis was computed as per Karl Pearson (1932), and the partitioning of correlation coefficients into direct and indirect effects was carried out using the procedure suggested by Dewey and Lu (1959).
