Learning Vector Quantization


Differential privacy for learning vector quantization


Prototype-based machine learning methods such as learning vector quantization (LVQ) offer flexible classification tools, which represent a classification in terms of typical prototypes. This representation leads to a particularly intuitive classification scheme, since prototypes can be inspected by a human partner in the same way as data points. Yet, it bears the risk of revealing private information included in the training data, since individual information of a single training data point can significantly influence the location of a prototype. In this contribution, we investigate the question of how to algorithmically extend LVQ such that it provably obeys privacy constraints as offered by the notion of so-called differential privacy. More precisely, we demonstrate the sensitivity of LVQ to single data points and hence the need to extend it to private variants in the case of possibly sensitive training data. We investigate three technologies which have been proposed in the context of differential privacy, and we extend these technologies to LVQ schemes.
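The abstract does not spell out the three differential-privacy technologies, but one standard option in this setting is output perturbation: add Laplace noise, calibrated to the prototypes' sensitivity and the privacy budget epsilon, to the trained prototype locations. The helper name and the concrete sensitivity/epsilon values below are illustrative assumptions, not the paper's method:

```python
import numpy as np

def privatize_prototypes(prototypes, sensitivity, epsilon, seed=None):
    """Output perturbation (sketch): add Laplace noise with scale
    sensitivity / epsilon to every prototype coordinate. The sensitivity
    bounds how much one training point can move a prototype (assumed known)."""
    rng = np.random.default_rng(seed)
    scale = sensitivity / epsilon
    return prototypes + rng.laplace(loc=0.0, scale=scale, size=prototypes.shape)

# Two trained 2-d prototypes; smaller epsilon means stronger privacy, more noise.
protos = np.array([[0.0, 0.0], [1.0, 1.0]])
noisy = privatize_prototypes(protos, sensitivity=0.1, epsilon=1.0, seed=0)
```

Since prototypes are the only released model parameters in LVQ, perturbing them is enough to protect the whole published classifier.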

Average Competitive Learning Vector Quantization


if γ_k = o(1/k), then SΣ + ΣSᵗ = V. (3)

2.3. The Average Competitive Learning Vector Quantization. The CLVQ only changes one quantizer at each iteration, and thus needs a large number of simulations to achieve good results. Our proposal is a modification of the competitive phase of the CLVQ method. This new procedure introduces a sort of small Lloyd step in this phase. The ACLVQ generates a set of N random vectors ξ instead of one, as the CLVQ method does. Our proposal requires less time to achieve quantization errors of the same order, even though it uses the same number of simulations as the CLVQ. The underlying idea can be described as follows.
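A minimal sketch of the idea, under the assumption that the "small Lloyd" step moves each winning quantizer toward the mean of the batch samples it won (function name and learning rate are illustrative):

```python
import numpy as np

def aclvq_step(codebook, samples, lr):
    """One ACLVQ-style iteration (sketch): assign a batch of N samples to
    their nearest quantizers, then move each selected quantizer toward the
    mean of the samples it won -- a small Lloyd step inside the competitive
    phase, instead of CLVQ's single-sample update."""
    dists = np.linalg.norm(samples[:, None, :] - codebook[None, :, :], axis=2)
    winners = dists.argmin(axis=1)
    updated = codebook.copy()
    for j in np.unique(winners):
        updated[j] += lr * (samples[winners == j].mean(axis=0) - codebook[j])
    return updated

cb = np.array([[0.0], [10.0]])
batch = np.array([[1.0], [2.0], [9.0]])  # N = 3 samples per iteration
cb2 = aclvq_step(cb, batch, lr=0.5)      # both quantizers move in one step
```

Because several quantizers can move per iteration, the same number of simulated samples is consumed in fewer, cheaper iterations.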

Statistical physics of learning vector quantization


Neural Networks Research Centre, Helsinki: 2002, Bibliography on the self-organizing maps (SOM) and learning vector quantization (LVQ), Otaniemi: Helsinki Univ. of Technology. Available online: http://liinwww.ira.uka.de/bibliography/Neural/SOM.LVQ.html. Opper, M.: 1994, Learning and generalization in a two-layer neural network: The role of the


Functional relevance learning in generalized learning vector quantization


Abstract: Relevance learning in learning vector quantization is a central paradigm for task-dependent feature weighting and selection in classification. We propose a functional approach to relevance learning for high-dimensional functional data. For this purpose we compose the relevance profile as a superposition of only a few parametrized basis functions, taking into account the functional character of the data. The number of these parameters is usually significantly smaller than the number of relevance weights in standard relevance learning, which equals the number of data dimensions. Thus, instabilities in learning are avoided and an inherent regularization takes place. In addition, we discuss strategies to obtain sparse relevance models for further model optimization.
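The parametrization can be sketched as follows: instead of D free relevance weights, the profile is generated from a handful of basis-function parameters (here Gaussians with center, width, and height; the normalization convention is an assumption):

```python
import numpy as np

def relevance_profile(dim, centers, widths, heights):
    """Build a D-dimensional relevance profile from K parametrized Gaussian
    basis functions: 3*K parameters instead of D independent weights."""
    t = np.arange(dim)
    profile = np.zeros(dim)
    for c, w, h in zip(centers, widths, heights):
        profile += h * np.exp(-0.5 * ((t - c) / w) ** 2)
    return profile / profile.sum()  # normalized like standard relevance weights

# 100-dimensional functional data, relevance concentrated around dims 20 and 70:
lam = relevance_profile(dim=100, centers=[20, 70], widths=[5, 10], heights=[1.0, 0.5])
```

With only six parameters shaping a 100-dimensional profile, the optimization is far better conditioned than adapting all 100 weights independently.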

Divergence-based classification in learning vector quantization


Technikumplatz 17, 09648 Mittweida, Germany. Abstract: We discuss the use of divergences in dissimilarity-based classification. Divergences can be employed whenever vectorial data consists of non-negative, potentially normalized features. This is, for instance, the case in spectral data or histograms. In particular, we introduce and study Divergence Based Learning Vector Quantization (DLVQ). We derive cost-function-based DLVQ schemes for the family of γ-divergences, which includes the well-known Kullback-Leibler divergence and the so-called Cauchy-Schwarz divergence as special cases. The corresponding training schemes are applied to two different real-world data sets.
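As an example of the special cases mentioned, the Cauchy-Schwarz divergence between two non-negative feature vectors can be computed directly from inner products (a standard formula; the paper's exact γ-parametrization is not reproduced here):

```python
import numpy as np

def cauchy_schwarz_divergence(p, q):
    """Cauchy-Schwarz divergence between two non-negative feature vectors
    (e.g. normalized histograms): -log of the cosine of the angle between
    them. It is zero exactly when p and q are proportional."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return -np.log((p @ q) / np.sqrt((p @ p) * (q @ q)))

h1 = np.array([0.2, 0.3, 0.5])  # a normalized 3-bin histogram
h2 = np.array([0.5, 0.3, 0.2])
d = cauchy_schwarz_divergence(h1, h2)  # positive for non-proportional inputs
```

Plugging such a divergence in place of the squared Euclidean distance is what turns a standard LVQ scheme into a DLVQ scheme.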

Hankel matrices for use in Learning Vector Quantization


In chapter 2, I presented two LVQ methods for classification. First, I explained generalized learning vector quantization (GLVQ), introduced by Sato and Yamada [22]. Then, I described the median variant of GLVQ, introduced by D. Nebel [12]. In chapter 3, I combined the topics of chapters 1 and 2: from chapter 1, I can construct feature vectors, and I can then use the classification approaches explained in chapter 2 to learn a classifier. In this chapter, I proposed two procedures for the classification task.
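The GLVQ of Sato and Yamada referenced here is built around the relative distance difference mu(x) = (d+ - d-) / (d+ + d-), where d+ is the distance to the closest prototype of the correct class and d- the distance to the closest prototype of any other class. A minimal sketch of evaluating it (function and variable names are mine):

```python
import numpy as np

def glvq_mu(x, prototypes, labels, y):
    """GLVQ classifier function mu(x) = (d+ - d-) / (d+ + d-), in [-1, 1].
    mu < 0 means x is closer to a correct-class prototype, i.e. classified
    correctly; GLVQ training minimizes a monotone function of mu."""
    d = np.sum((prototypes - x) ** 2, axis=1)  # squared Euclidean distances
    d_plus = d[labels == y].min()
    d_minus = d[labels != y].min()
    return (d_plus - d_minus) / (d_plus + d_minus)

protos = np.array([[0.0, 0.0], [2.0, 2.0]])
labs = np.array([0, 1])
mu = glvq_mu(np.array([0.2, 0.1]), protos, labs, y=0)  # negative: correct
```

The median variant keeps the same cost but restricts prototypes to actual data points, which is what makes it applicable to dissimilarity data such as the Hankel-matrix features of this thesis.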


Divergence-based classification in learning vector quantization


Abstract: We discuss the use of divergences in dissimilarity-based classification. Divergences can be employed whenever vectorial data consists of non-negative, potentially normalized features. This is, for instance, the case in spectral data or histograms. In particular, we introduce and study divergence based learning vector quantization (DLVQ). We derive cost-function-based DLVQ schemes for the family of γ-divergences, which includes the well-known Kullback-Leibler divergence and the so-called Cauchy-Schwarz divergence as special cases. The corresponding training schemes are applied to two different real-world data sets. The first one, a benchmark data set (Wisconsin Breast Cancer), is available in the public domain. In the second problem, color histograms of leaf images are used to detect the presence of cassava mosaic disease in cassava plants. We compare the use of standard Euclidean distances with DLVQ for different parameter settings. We show that DLVQ can yield superior classification accuracies and Receiver Operating Characteristics.

Generalized functional relevance learning vector quantization


2 - CITEC, Faculty of Technology, Bielefeld University, 33594 Bielefeld, Germany; 3 - Johann Bernoulli Institute for Mathematics and Computer Science, University of Groningen, The Netherlands. Abstract: Generalized relevance learning vector quantization (GRLVQ) is a prototype-based classification algorithm with metric adaptation, weighting each data dimension according to its relevance for the classification task. We present in this paper an extension for functional data, which are usually very high-dimensional. This approach assumes that the data vectors are functional representations. Taking this information into account, the so-called relevance profiles are modeled by a superposition of simple basis functions depending on only a few parameters. As a consequence, the resulting functional GRLVQ has a drastically reduced number of parameters to be adapted for relevance learning. We demonstrate the ability of the new algorithms on standard functional data sets using different basis functions, namely Gaussians and Lorentzians.


Adaptive Relevance Matrices in Learning Vector Quantization


6 Discussion. We have proposed a new metric learning scheme for LVQ classifiers which allows a full matrix to be adapted according to the given classification task. This scheme extends the successful relevance learning vector quantization algorithm such that correlations of dimensions can be accounted for during training. The learning scheme can be derived directly as a stochastic gradient of the GLVQ cost function, such that the convergence and flexibility of the original GLVQ are preserved. Since the resulting classifier is represented by prototype locations and matrix parameters, the results can be interpreted by humans: prototypes show typical class representatives, and matrix parameters reveal the importance of input dimensions through the diagonal elements and the importance of correlations through the off-diagonal elements. Local as well as global parameters can be used, i.e. relevance terms which contribute to a good description of single classes or of the global classification, respectively, can be identified. The efficiency of the model has been demonstrated in several application scenarios, which impressively demonstrate the increased capacity of local matrix adaptation schemes. Interestingly, local matrix learning obtains a classification accuracy similar to the performance of the SVM in several cases, while it employs less complex classification schemes and maintains intuitive interpretability of the results.
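The adaptive matrix distance described here is conventionally written d(x, w) = (x - w)ᵀ Λ (x - w) with Λ = ΩᵀΩ, which keeps Λ positive semi-definite while Ω is adapted freely. A sketch of evaluating it:

```python
import numpy as np

def gmlvq_distance(x, w, omega):
    """Adaptive matrix distance d(x, w) = (x-w)^T Lambda (x-w) with
    Lambda = Omega^T Omega (positive semi-definite by construction).
    Diagonal entries of Lambda weight single dimensions; off-diagonal
    entries account for pairwise correlations of dimensions."""
    lam = omega.T @ omega
    diff = x - w
    return diff @ lam @ diff

# Omega that weights dimension 0 fully and dimension 1 at half strength:
omega = np.array([[1.0, 0.0], [0.0, 0.5]])
d = gmlvq_distance(np.array([1.0, 2.0]), np.array([0.0, 0.0]), omega)
```

Training adapts prototype locations and the entries of Ω jointly by stochastic gradient descent on the GLVQ cost; the interpretability claims above refer to reading off the learned Λ afterwards.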

Spectral Regularization in Generalized Matrix Learning Vector Quantization


Millennium Institute of Astrophysics, Chile. Email: pestevez@ing.uchile.cl. Abstract: In this contribution we propose a new regularization method for the Generalized Matrix Learning Vector Quantization classifier. In particular, we use a nuclear norm in order to prevent oversimplification/over-fitting and oscillatory behaviour of the small eigenvalues of the positive semi-definite relevance matrix. The proposed method is compared with two other regularization methods on two artificial data sets and a real-life problem. The results show that the proposed regularization method enhances the generalization ability of GMLVQ. This is reflected in a lower classification error and better interpretability of the relevance matrix.
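The nuclear norm used as the regularizer is the sum of singular values of the relevance matrix. How exactly it enters the GMLVQ cost is not stated in this excerpt, so the sketch below only shows computing the norm itself; the penalty weighting would be an additional hyperparameter:

```python
import numpy as np

def nuclear_norm(mat):
    """Nuclear norm: the sum of singular values of a matrix. For the
    positive semi-definite relevance matrix of GMLVQ, the singular values
    coincide with the eigenvalues, so the norm tracks the eigenvalue
    spectrum that the regularizer is meant to keep well-behaved."""
    return np.linalg.svd(mat, compute_uv=False).sum()

# A diagonal relevance matrix whose eigenvalues sum to one:
lam = np.diag([0.7, 0.2, 0.1])
n = nuclear_norm(lam)
```

Penalizing a spectral quantity like this acts on all eigenvalues at once, which is what discourages the smallest eigenvalues from collapsing or oscillating during training.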

Javanese Script Recognition Using Learning Vector Quantization (LVQ)


---, and Taufiq Hidayat (2006). "Implementasi Learning Vector Quantization (LVQ) Untuk Pengenal Pola Sidik Jari Pada Sistem Informasi Narapidana LP Wirogunan" [Implementation of Learning Vector Quantization (LVQ) for Fingerprint Pattern Recognition in the LP Wirogunan Inmate Information System], in http://journal.uii.ac.id/index.php/media-informatika/article/view/121/82, accessed 13 December 2010. ---, and Eko Sri Wahyono (2009). "Identifikasi Nomor Polisi Mobil Menggunakan Metode Jaringan Saraf Buatan Learning Vector Quantization" [Identification of Car License Plates Using the Learning Vector Quantization Artificial Neural Network Method], in http://www.gunadarma.ac.id/library/articles/graduate/industrial-technology/2009/Artikel_50405248.pdf, accessed 13 December 2010.


Improving learning vector quantization using data reduction

Learning Vector Quantization (LVQ) is a supervised learning algorithm commonly used for statistical classification and pattern recognition. The competitive layer in LVQ studies the input vectors and classifies them into the correct classes. The amount of data involved in the learning process can be reduced by using data reduction methods. In this paper, we propose a data reduction method that uses the geometrical proximity of the data. The basic idea is to drop sets of data points that have many similarities and keep one representative for each set. With certain adjustments, the data reduction method can decrease the amount of data involved in the learning process while still maintaining the existing accuracy. The amount of data involved in the learning process can be reduced to 33.22% for the abalone dataset and 55.02% for the bank marketing dataset, respectively.
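The basic idea of keeping one representative per cluster of similar points can be sketched with a greedy proximity filter; the threshold eps and the greedy scan order are my assumptions, not the paper's exact procedure:

```python
import numpy as np

def reduce_by_proximity(X, eps):
    """Greedy proximity-based reduction (sketch): scan the data once and
    keep a point only if it lies farther than eps from every representative
    kept so far, so each tight group of similar points contributes a single
    sample to the LVQ training set."""
    kept = []
    for x in X:
        if all(np.linalg.norm(x - r) > eps for r in kept):
            kept.append(x)
    return np.array(kept)

# Two tight pairs of points collapse to two representatives:
X = np.array([[0.0, 0.0], [0.05, 0.0], [1.0, 1.0], [1.02, 0.98]])
R = reduce_by_proximity(X, eps=0.2)
```

In a class-labeled setting one would run such a filter per class, so representatives never absorb points of another class.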

Image Contrast Enhancement using Learning Vector Quantization


The architecture of Learning Vector Quantization is similar to the architecture of Kohonen's Self-Organizing Map (SOM). In the following diagram, the input is represented by x and the output by y; the number of inputs is n and the number of outputs is m. Between the input layer and the output layer we apply weights, which are adjusted during training to control the output so that the actual output matches the desired output.
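The weight adjustment described here is the classical LVQ1 rule: the output unit whose weight vector is closest to the input wins, and its weights move toward the input if its class matches the desired output, away otherwise. A minimal sketch:

```python
import numpy as np

def lvq1_update(weights, weight_labels, x, y, lr):
    """LVQ1 update (sketch): find the winning output unit (closest weight
    vector to input x), then pull its weights toward x when its class label
    matches the desired output y, and push them away when it does not."""
    winner = np.linalg.norm(weights - x, axis=1).argmin()
    sign = 1.0 if weight_labels[winner] == y else -1.0
    weights = weights.copy()
    weights[winner] += sign * lr * (x - weights[winner])
    return weights

# m = 2 output units over n = 2 inputs; input of class 0 attracts unit 0:
W = np.array([[0.0, 0.0], [1.0, 1.0]])
W2 = lvq1_update(W, np.array([0, 1]), x=np.array([0.2, 0.0]), y=0, lr=0.5)
```

Only the winning unit's weights change per presentation; the other m - 1 units are untouched.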


Differentiable Kernels in Generalized Matrix Learning Vector Quantization


We show that the concept of differentiable kernels allows a prototype description in the data space, but equipped with the kernel metric. Moreover, using the visualization properties of the original matrix learning vector quantization, we are able to optimize the class visualization by inherent visualization mapping learning in this new kernel-metric data space as well.
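The kernel metric referred to here can be evaluated without ever mapping into feature space, using the standard identity ||φ(x) - φ(w)||² = k(x,x) - 2k(x,w) + k(w,w). A sketch with an RBF kernel (the kernel choice is an example, not the paper's):

```python
import numpy as np

def kernel_distance_sq(x, w, kernel):
    """Squared distance between x and a prototype w in the kernel-induced
    feature space, expressed purely through kernel evaluations:
    ||phi(x) - phi(w)||^2 = k(x,x) - 2 k(x,w) + k(w,w)."""
    return kernel(x, x) - 2.0 * kernel(x, w) + kernel(w, w)

def rbf(a, b, sigma=1.0):
    """Gaussian (RBF) kernel -- differentiable, as the approach requires."""
    return np.exp(-np.sum((a - b) ** 2) / (2.0 * sigma * sigma))

d = kernel_distance_sq(np.array([0.0]), np.array([1.0]), rbf)
```

Because the prototype w stays in the data space, it remains inspectable, while distances behave as in the kernel feature space; differentiability of k is what makes gradient-based prototype updates possible.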


Creating Pixel Art Using Learning Vector Quantization


Learning Vector Quantization is used to train codebook vectors based on a 5-dimensional feature space. The product is a process that recognizes the membership of each pixel in an image region based on the pixel's spatial information and color, forming Voronoi areas based on the reference objects of the image to be segmented. The proposed topic is an extension of the expressive range of traditional art as a new creative art form, one that can be easily understood using the means of digital image processing technology.
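A plausible reading of the 5-dimensional features is (x, y, R, G, B) per pixel, combining spatial position and color; the construction below is a sketch under that assumption, with a spatial-weight knob that is my addition:

```python
import numpy as np

def pixel_features(image, spatial_weight=1.0):
    """Build a 5-dimensional feature vector (x, y, R, G, B) for every pixel
    of an H x W x 3 image. spatial_weight (an assumed knob) balances how
    much position counts against color when distances are computed."""
    h, w, _ = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([xs, ys], axis=-1).reshape(-1, 2) * spatial_weight
    colors = image.reshape(-1, 3)
    return np.hstack([coords, colors]).astype(float)

img = np.zeros((2, 2, 3))   # tiny 2x2 black image
F = pixel_features(img)      # 4 pixels -> 4 feature vectors of length 5
```

Training codebook vectors on these features partitions the image into Voronoi cells that are coherent in both space and color, which is exactly the blocky structure pixel art needs.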

Convergence of Distributed Asynchronous Learning Vector Quantization Algorithms


The analysis of parallel stochastic gradient procedures in a machine learning context has recently received a great deal of attention (see for instance Zinkevich et al. 2009 and McDonald et al. 2010). In the present paper, we go further by introducing a model that brings together the original CLVQ algorithm and the comprehensive theory of asynchronous parallel linear algorithms developed by Tsitsiklis (1984), Tsitsiklis et al. (1986) and Bertsekas and Tsitsiklis (1989). The resulting model will be called distributed asynchronous learning vector quantization (DALVQ for short). At a high level, the DALVQ algorithm runs several executions of the CLVQ method concurrently on different processors, while the results of these algorithms are broadcast through the distributed framework asynchronously and efficiently. Here, the term processor refers to any computing instance in a distributed architecture (see Bullo et al. 2009, chap. 1, for more details). Let us remark that there is a series of publications similar in spirit to this paper. Indeed, in Frasca et al. (2009) and in Durham et al. (2009), a coverage control problem is formulated as an optimization problem in which the functional cost to be minimized is the same as that of the quantization problem stated in this manuscript.

Face Recognition Using Learning Vector Quantization (LVQ)


F.12. Face Recognition Using Learning Vector Quantization (LVQ) (S Heranurweni). After segmentation comes the image recognition stage, which serves to recognize and detect the shapes of the input image and the target image. The target image here is an image that is genuinely easy to recognize, while the input image is the image to be compared against that target. If the input image has a pattern that is nearly the same as the target image's, the input image can be recognized, i.e. considered normal.


Spoken Word Recognition Using MFCC and Learning Vector Quantization


E. Learning Vector Quantization. Learning Vector Quantization (LVQ) is a supervised version of vector quantization that can be used when each input datum has a class label. This learning technique uses the class information to reposition the Voronoi vectors slightly so as to improve the quality of the classifier's decision regions; it is adapted from the Kohonen Map. LVQ is a two-stage process, as shown in Fig. 3. The input of the LVQ is the cepstrum result of 198 points (the number n).
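The cepstral input feeding the LVQ can be illustrated with the real cepstrum of one speech frame, i.e. the inverse DFT of the log magnitude spectrum; the frame length and sampling rate below are illustrative, and MFCC proper would add a mel filter bank before the final transform:

```python
import numpy as np

def real_cepstrum(frame):
    """Real cepstrum of one speech frame (sketch): inverse DFT of the log
    magnitude spectrum. A fixed-length vector of such cepstral coefficients
    is the kind of input the LVQ stage consumes."""
    spectrum = np.abs(np.fft.rfft(frame))
    log_spectrum = np.log(spectrum + 1e-12)  # small offset avoids log(0)
    return np.fft.irfft(log_spectrum, n=len(frame))

# A 256-sample frame of a 440 Hz tone at 8 kHz sampling rate:
t = np.arange(256) / 8000.0
frame = np.sin(2.0 * np.pi * 440.0 * t)
c = real_cepstrum(frame)
```

Concatenating or truncating such per-frame coefficients to a fixed length (198 points in this paper) gives every spoken word a vector the competitive layer can compare against its Voronoi vectors.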


Learning Vector Quantization: generalization ability and dynamics of competing prototypes


Abstract. Learning Vector Quantization (LVQ) algorithms are popular multi-class classification algorithms. Prototypes in an LVQ system represent the typical features of classes in the data. Frequently, multiple prototypes are employed for a class to improve the representation of variations within the class and the generalization ability. In this paper, we investigate the dynamics of LVQ in an exact mathematical way, aiming at understanding the influence of the number of prototypes and their assignment to classes.
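At classification time, multiple prototypes per class simply means the input takes the label of the single nearest prototype, so one class can cover several disjoint regions of the input space. A minimal sketch:

```python
import numpy as np

def classify(x, prototypes, labels):
    """Nearest-prototype classification: with several prototypes per class,
    x receives the label of the closest prototype, letting a single class
    occupy multiple regions of the input space."""
    return labels[np.linalg.norm(prototypes - x, axis=1).argmin()]

# Class 0 owns two prototypes (around 0 and 4), class 1 owns one (at 2):
protos = np.array([[0.0], [4.0], [2.0]])
labs = np.array([0, 0, 1])
pred = classify(np.array([3.8]), protos, labs)
```

The paper's question is then how the learning dynamics distribute these competing prototypes over the class-conditional data distribution.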

