Registers are the fastest memory on the GPU, so limiting the number of local variables is important for high-performance applications. Shared memory is a read/write memory that allows data sharing across threads in the same block; it is implemented as small on-chip memory blocks within each streaming multiprocessor, providing much faster storage without consuming global memory bandwidth. Global memory is a read/write memory that can be accessed by any thread. All incoming data and outgoing results must pass through global memory, and as a result it is often the main bottleneck of the GPU. Another form of memory is constant memory, which allows only read operations at runtime. Like global memory, constant memory can be accessed by any thread, but it is heavily cached, making it much quicker than global memory. The final memory type is texture memory, a special read-only memory that has been optimized for texture-based operations. Using special hardware built into the pipeline, several common texture functions can be performed automatically, including pixel interpolation and border wrapping. Unlike other forms of memory on the GPU, texture memory has constant access times for both cache hits and misses, which allows for better scheduling, and a 2D cache, which gives it better 2D spatial locality.


Email: nagayothi.adhinayakanti@gmail.com
Abstract—
An approach to recognizing a face using a Principal Component Analysis based genetic algorithm in the area of computer vision is described in this paper. Face recognition has been one of the interesting and important research fields in past years. Face recognition is a method by which a face is automatically identified. Facial image analysis plays an important role in human-computer interaction, but automatic face recognition is still a big challenge for many applications. PCA is used to extract features from images, using the covariance analysis method to generate the eigen components of the images and to reduce the dimensionality. The genetic algorithm, an optimization technique, finds optimal solutions in the generated large search space. In this analysis we have used the Japanese Female Facial Expression (JAFFE) face database, with a recognition result of approximately 96%.

So, developing such a program is a difficult task in this digital world; it may involve earlier techniques that were used for the recognition of faces, to make it reliable.
For face identification, the first step involves extracting the relevant features from facial images. A big challenge is how to quantify facial features so that a computer is able to identify a face. Studies carried out by many researchers over the past several years indicate that certain facial characteristics are used by human beings to identify faces. Principal component analysis (PCA) is a classic tool widely used in appearance-based approaches for dimensionality reduction and feature extraction in most pattern recognition applications. Hau T. Ngo et al.
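The dimensionality-reduction idea behind PCA can be made concrete on a toy 2-D dataset. The sketch below is pure Python and purely illustrative (a real eigenface pipeline works on the full image covariance): it centres the data, builds the 2x2 covariance matrix, and projects each point onto the leading eigenvector to get a single extracted feature.

```python
import math

def pca_2d(points):
    """Project 2-D points onto their first principal component."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    centred = [(x - mx, y - my) for x, y in points]

    # 2x2 covariance matrix [[a, b], [b, c]]
    a = sum(x * x for x, _ in centred) / n
    b = sum(x * y for x, y in centred) / n
    c = sum(y * y for _, y in centred) / n

    # Eigenvalues of a symmetric 2x2 matrix via the quadratic formula.
    tr, det = a + c, a * c - b * b
    lam1 = tr / 2 + math.sqrt(tr * tr / 4 - det)  # leading eigenvalue

    # Leading eigenvector (handle the axis-aligned case b == 0).
    if abs(b) > 1e-12:
        v = (lam1 - c, b)
    else:
        v = (1.0, 0.0) if a >= c else (0.0, 1.0)
    norm = math.hypot(v[0], v[1])
    v = (v[0] / norm, v[1] / norm)

    # 1-D features: projection of each centred point onto v.
    return [x * v[0] + y * v[1] for x, y in centred]
```

For points lying near a line, the first component captures almost all of the variance, which is exactly why PCA serves as a feature extractor.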

This resulted in the matrix being trimmed down to a size of 5034 by 34, at which point the matrix was factorized via an SVD, producing the results below.
The Σ matrix contained 30 unique singular values (σ_i) [21], corresponding to the 30 columns in our adjusted matrix. These σ_i values declined in magnitude as expected, and only the first 13 σ_i values returned coefficients above zero. This means that with the largest 13 singular values we can predict the other 17 singular values for any log we are given, allowing us to predict the remaining information and details of an individual log if we know less than half of its information. The 30 σ_i values can also be seen below in descending order. After these results were produced, more analysis was done on the logs and columns of the network data being produced. The matrices of network data were of size 1000000 by 34 for the first 12 csv files that were exported, and the 13th csv file contained slightly fewer logs, meaning there were slightly fewer than 13 million logs of data to iterate through. The 13 network data csv files were read and concatenated into a single data matrix, with the first four columns ignored, which was then z-transformed and processed into an SVD the same way our subset sample of data had been. The z-transformation process is described later in section 5.4 of this paper.
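The z-transformation mentioned here is ordinary column-wise standardisation. A minimal pure-Python sketch of one column is below; this is an assumption about the preprocessing (the paper's exact variant is the one described in its section 5.4):

```python
import statistics

def z_transform(column):
    """Standardise one data-matrix column to zero mean, unit variance."""
    mu = statistics.mean(column)
    sigma = statistics.pstdev(column)  # population standard deviation
    if sigma == 0:
        # A constant column carries no variance; map it to all zeros.
        return [0.0 for _ in column]
    return [(x - mu) / sigma for x in column]
```

Standardising each column first keeps any single large-magnitude feature from dominating the singular values of the SVD.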


Our manager also bears the responsibility of running mathematical analysis. It enables easy experimentation with our algorithm, as explained in chapter 4. Note that feature generation occurs with Python 2.7 while math analysis uses Python 3.6. There are ways to translate Python 2 to Python 3 with libraries such as Futurize [58], but with several moving components in our code base, and not having anticipated this problem at the beginning of the project, we decided not to convert our code. One could imagine that, with time, a manager could be written entirely in one Python version, or one that utilizes something such as the Futurize library. As with many Python projects, we decided to develop within a virtual environment using Python's VirtualEnv [14]. We took advantage of this decision and have two Python virtual environments, one for Python 2.7 and one for Python 3.6. All dependencies are clearly defined within our code repository readme [59]. Our manager begins the feature generation code in the Python 2.7 virtual environment, and a bash script is started once feature generation is complete. This bash script deactivates the Python 2.7 virtual environment and activates the Python 3.6 environment to run our math algorithm on the feature matrix csv file. This process closes the gap between computer science and mathematical analysis. What follows is the visualization step of the results for a user, explained in more detail in Chapter 6. A user can use this manager to examine any number of ranges for any log. Another advantage of having a manager script is that adding or removing specific feature sets becomes far easier.
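The hand-off between the two interpreters can be sketched in a few lines of Python. The interpreter paths, script names, and flags below are illustrative placeholders, not the project's actual files (those live in the repository readme [59]):

```python
import subprocess

def run_pipeline(py2, feature_script, py3, analysis_script, features_csv):
    """Sketch of the manager's hand-off: feature generation under the
    Python 2.7 interpreter, then mathematical analysis under 3.6.
    All paths and the --out/--in flags are hypothetical examples.
    """
    # Step 1: generate the feature matrix csv with the 2.7 toolchain.
    subprocess.check_call([py2, feature_script, "--out", features_csv])
    # Step 2: run the math algorithm on that csv with the 3.6 toolchain.
    subprocess.check_call([py3, analysis_script, "--in", features_csv])
    return features_csv
```

Passing explicit interpreter paths (e.g. each virtualenv's `bin/python`) achieves the same effect as the activate/deactivate dance in the bash script.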


Further, the non-convex models are run 10 times (to determine a good local minimum) for every tuple of the parameter range, and the minimum error is reported. The k-means clustering procedure


Abstract. Principal Component Analysis (PCA) is the problem of finding a low-rank approximation to a matrix. It is a central problem in statistics, but it is sensitive to sparse errors with large magnitudes. Robust PCA addresses this problem by decomposing a matrix into the sum of a low-rank matrix and a sparse matrix, thereby separating out the sparse errors. This paper provides a background in robust PCA and investigates the conditions under which an optimization problem, Principal Component Pursuit (PCP), solves the robust PCA problem. Before introducing robust PCA, we discuss a related problem, sparse signal recovery (SSR), the problem of finding the sparsest solution to an underdetermined system of linear equations. The concepts used to solve SSR are analogous to the concepts used to solve robust PCA, so presenting the SSR problem gives insight into robust PCA. After analyzing robust PCA, we present the results of numerical experiments that test whether PCP can solve the robust PCA problem even if previously proven sufficient conditions are violated.
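The decomposition robust PCA targets can be written out concretely. The sketch below only constructs an observation matrix M as the sum of a rank-1 matrix L and a sparse corruption S; recovering L and S from M alone is what PCP solves, and that step is not implemented here.

```python
def outer(u, v):
    """Rank-1 matrix u v^T as nested lists."""
    return [[ui * vj for vj in v] for ui in u]

def add(A, B):
    """Entrywise sum of two equally sized matrices."""
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

# Low-rank part L: rank 1, so every row is a multiple of [4, 5, 6].
L = outer([1, 2, 3], [4, 5, 6])

# Sparse part S: a single gross corruption of large magnitude.
S = [[0, 0, 0], [0, 100, 0], [0, 0, 0]]

# Observed matrix M = L + S. Robust PCA asks: given only M,
# split it back into a low-rank L and a sparse S.
M = add(L, S)
```

Note that the corrupted entry (110) is far from the rank-1 value (10), which is exactly the kind of sparse, large-magnitude error that breaks classical PCA.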


Different approaches for computing sparse loadings matrices have been proposed in the literature. Vines (2000) and Anaya-Izquierdo et al. (2011) use a restriction of the loadings to integers. Jolliffe et al. (2003) introduced SCoTLASS, related to the Lasso estimator (Tibshirani, 1996). Here the principal components maximize the variance, but under an upper bound on the sum of the absolute values of the loadings. It is shown that such an approach yields better results than a two-step procedure where rotation techniques are performed after a standard PCA (Jolliffe, 1995). Zou et al. (2006) use the elastic net to obtain a version of sparse PCA. Modifications and improvements of this method are made in Leng and Wang (2009). Finally, Guo et al. (2011) introduce a fusion penalty to capture block structures within the variables. All these methods, however, are not robust to outliers.


Abstract: Two robust approaches to principal component analysis and factor analysis are presented. The different methods are compared, and properties are discussed. As an application we use a large geochemical data set which was analyzed in detail by univariate (geo-)statistical methods. We explain the advantages of applying robust multivariate methods.

Nuclear norm based PCA attempts to recover clean data with low-rank structure from the corrupted data so that the robustness of the PCA series mentioned above can be enhanced. Due to its great potential for use in real-life applications such as image denoising [8, 9], video surveillance [10] and image clustering [11, 12], this research has attracted a lot of attention from both academia and industry. Candes et al. [8] demonstrated that PCA can be made robust against outliers by exactly recovering the low-rank representation even from grossly corrupted data via solving a simple


Recent attempts using parameterized feature models and multi-scale matching look more promising, but still face severe problems before they are generally applicable. Current connectionist approaches tend to hide much of the pertinent information in the weights, which makes it difficult to modify and evaluate parts of the approach.
The eigenface approach to face recognition was motivated by information theory, leading to the idea of basing face recognition on a small set of features that best approximate the set of known face images, without requiring that they correspond to our intuitive notions of facial parts and features.


Over the last ten years or so, face recognition has become a popular area of research in computer vision and one of the most successful applications of image analysis and understanding. Because of the nature of the problem, not only computer science researchers are interested in it, but neuroscientists and psychologists as well. The general opinion is that advances in computer vision research will provide useful insights to neuroscientists and psychologists into how the human brain works, and vice versa [1]. The goal is to implement a system (model) for a particular face and distinguish it from a large number of stored faces, with some real-time variations as well. It gives us an efficient way to find the lower-dimensional space. Further, this algorithm can be extended to recognize the gender of a person or to interpret the facial expression of a person. Recognition can be carried out under widely varying conditions: a frontal view, a 45° view, a scaled frontal view, subjects with spectacles, etc., while the training data set covers only limited views. The algorithm can model real-time varying lighting conditions as well, but this is outside the scope of the current implementation.

Department of Computer Science 1,2 Stella Maris College 1,2
ramyasrinivasagam@gmail.com 1 , birunda78@gmail.com 2
Abstract: An image is a depiction that can be generated, duplicated and saved in any electronic form. Image processing is a method of converting an image into digital form and performing some operations on it, in order to get an enhanced image or to extract some useful information from it. An image processing system treats images as two-dimensional signals, applying established signal processing methods to them. Human face recognition plays an important role in all sectors. Many fraudulent and criminal activities are detected using the human face. Many methods are used for maintaining security, such as credit cards, PIN numbers, smart cards, etc. But sometimes these fail.

ABSTRACT
Nowadays, transactions are increasingly handled electronically instead of through physical interaction. Biometrics has long been used in the authentication of a person. It recognizes a person using an associated human trait, such as the eyes (by calculating the distance between the eyes), hand gestures, fingerprint detection, face detection, etc. The advantage of using these traits for identification is that they uniquely identify a person and cannot be forgotten or lost. These are unique features of a human being which are widely used to make human life simpler. A hand gesture recognition system is a powerful tool that supports efficient interaction between the user and the computer. The main goal of hand gesture recognition research is to create a system which can recognise specific hand gestures and use them to convey useful information for device control. This paper presents an experimental study of the feasibility of principal component analysis in a hand gesture recognition system. PCA is a powerful tool for analyzing data; its primary goal is dimensionality reduction. Frames are extracted from the Sheffield Kinect Gesture (SKIG) dataset. The implementation is done by creating a training set and then training the recognizer. It uses the eigenspace obtained by computing the eigenvalues and eigenvectors of the images in the training set.

The approach was used to build a face detection system that is around 15 times faster than any previous approach.
Taranpreet Singh Ruprah et al. (2010) [2] presented a face recognition system using PCA with neural networks for face verification and face recognition, using photometric normalization for comparison. In this paper, features were extracted using principal component analysis and then classified by a back-propagation neural network. The authors ran the face recognition application using principal component analysis and a neural network, and the performance using the photometric normalization technique, histogram equalization, was calculated and compared with Euclidean distance and normalized correlation classifiers. The system produces good results for face verification and recognition. The experimental results showed that the neural network with the Euclidean distance rule using PCA gave the best overall performance for verification. For recognition, however, the Euclidean distance classifier gives the highest accuracy using the original face image. Thus, applying histogram equalization to the face image did not have much impact on the performance of the system when conducted under a controlled environment.
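The Euclidean distance (E.D.) classifier compared above is just nearest-neighbour matching in the projected feature space. A generic sketch, not the paper's code, with hypothetical labels:

```python
import math

def euclidean(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def nearest_neighbour(query, gallery):
    """Return the label of the gallery vector closest to the query.

    gallery: list of (label, feature_vector) pairs, e.g. PCA
    projections of the enrolled face images.
    """
    return min(gallery, key=lambda item: euclidean(query, item[1]))[0]
```

Despite its simplicity, this classifier is a standard baseline in eigenface-style systems because distances in the low-dimensional PCA space approximate distances between whole images.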

modifies the analysis by replacing Σ with a robust estimator, an M-estimator [11], which gives full weight to observations assumed to come from the main body of the data and downweights the influence of observations with unduly large Mahalanobis distances. An observation is weighted according to its total distance d_m, calculated using the Mahalanobis distance from the robust estimate of location. The components of this distance along each eigenvector may lead to an incorrectly downweighted observation that has a large component along one direction and small components along others. So, M-estimators for mean and variance are used to find the principal components with minimum influence from atypical observations.
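For the 2-D case the distance d_m can be computed directly with an explicit 2x2 covariance inverse. This is a generic sketch of the Mahalanobis distance itself; in the robust scheme above, the plain mean and covariance would be replaced by the M-estimates of location and scatter.

```python
import math

def mahalanobis_2d(x, mean, cov):
    """Mahalanobis distance of a 2-D point from a location estimate."""
    a, b = cov[0]
    c, d = cov[1]
    det = a * d - b * c  # assumed nonzero (cov positive definite)
    inv = [[d / det, -b / det], [-c / det, a / det]]
    dx = [x[0] - mean[0], x[1] - mean[1]]
    # d_m^2 = dx^T cov^{-1} dx
    q = (dx[0] * (inv[0][0] * dx[0] + inv[0][1] * dx[1])
         + dx[1] * (inv[1][0] * dx[0] + inv[1][1] * dx[1]))
    return math.sqrt(q)
```

With an identity covariance this reduces to the Euclidean distance; with unequal variances it correctly discounts deviations along the high-variance direction.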


Data was collected for targets performing each of the following classes of motion: Human Walking (slow); Human Walking (medium); Human Walking (fast); Horse With Rider Walking (medium); Horse With Rider Walking (fast); Horse and Human Both Present.
The dataset consists of 28 observations for each class of motion, where the duration of each observation is 0.5 s; the exception is class 6, for which there are 112 observations. Classification of extracted feature vectors was performed using an SVM classifier with a radial basis function (RBF) kernel, employing a cross-validation grid search for the selection of cost-function and kernel parameters. The one-against-all approach was used to perform multi-class classification between the six classes. When classifying the test dataset, a system of Monte Carlo testing was applied whereby classification was performed repeatedly using randomly generated permutations of training and test data for each repetition. For each test, 50 repetitions were carried out, and a ratio of roughly 70%
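The repeated random partitioning can be sketched as follows. The roughly 70% training ratio and 50 repetitions match the description above; the function name, seeding, and everything else (including the SVM itself, which is omitted) are illustrative.

```python
import random

def monte_carlo_splits(samples, train_ratio=0.7, repetitions=50, seed=0):
    """Generate repeated random train/test partitions of the samples."""
    rng = random.Random(seed)  # seeded for reproducibility of the sketch
    n_train = int(round(train_ratio * len(samples)))
    splits = []
    for _ in range(repetitions):
        shuffled = samples[:]          # fresh permutation each repetition
        rng.shuffle(shuffled)
        splits.append((shuffled[:n_train], shuffled[n_train:]))
    return splits
```

Averaging classifier accuracy over the 50 splits gives an estimate that does not depend on one lucky or unlucky partition.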

Department of Information Technology, Kalyani Government Engineering College, West Bengal, India 1,2
ABSTRACT: Digital image processing has contributed a lot to the research domain. In this paper we mainly concentrate on the recognition of English numeric characters. First we collected many samples of numeric character images of 0-9 and resized them to a fixed dimension. Then we performed a thresholding operation, based on the similarity of pixel values in gray scale, to obtain binary images. From those binary images, feature extraction was performed to collect different properties such as area, eccentricity, Euler number, and major and minor axis lengths. Then feature reduction was performed. After that, the well-known Principal Component Analysis (PCA) was run over the selected data.
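The thresholding step can be sketched in a couple of lines of Python. The fixed cut-off of 128 below is an illustrative assumption; the paper derives its threshold from the similarity of pixel values rather than using a constant.

```python
def to_binary(gray_image, threshold=128):
    """Threshold a grayscale image (nested lists of 0-255 intensities)
    into a binary image of 0s and 1s."""
    return [[1 if px >= threshold else 0 for px in row]
            for row in gray_image]
```

Binarising first makes the later shape features (area, Euler number, axis lengths) well defined, since they are computed on the foreground pixels.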

{v.hovhannisyan13, i.panagakis, s.zafeiriou, p.parpas}@imperial.ac.uk
Abstract
Robust principal component analysis (RPCA) is currently the method of choice for recovering a low-rank matrix from sparse corruptions that are of unknown value and support, by decomposing the observation matrix into low-rank and sparse matrices. RPCA has many applications including background subtraction, learning of robust subspaces from visual data, etc. Nevertheless, the application of SVD in each iteration of optimisation methods renders the application of RPCA challenging when data is large. In this paper, we propose the first, to the best of our knowledge, multilevel approach for solving convex and non-convex RPCA models. The basic idea is to construct lower-dimensional models and perform SVD on them instead of on the original high-dimensional problem. We show that the proposed approach gives a good approximate solution to the original problem for both convex and non-convex formulations, while being many times faster than original RPCA methods on several real-world datasets.

However, this convex relaxation may make the solutions deviate from the original ones.
To this end, the Generalised Scalable Robust PCA (GSRPCA) is proposed, by reformulating the robust PCA problem using the Schatten p-norm and the ℓ_q-norm subject to orthonormality constraints, resulting in a better non-convex approximation of the original sparsity-regularised rank minimisation problem. It is worth noting that the common robust PCA variants are special cases of the GSRPCA when p = q = 1 and by properly choosing the upper bound on the number of principal components. An efficient algorithm for the GSRPCA is developed. The performance of the GSRPCA is assessed by conducting experiments on both synthetic and real data. The experimental results indicate that the GSRPCA outperforms the common state-of-the-art robust PCA methods without introducing much extra computational cost.
