Abstract: Orthogonal Frequency Division Multiplexing (OFDM), while an efficient scheme for high-data-rate wireless communications, suffers from drawbacks such as a high Peak-to-Average Power Ratio (PAPR). Among the multiple-signal-representation techniques for PAPR reduction, Partial Transmit Sequence (PTS) is one of the most favored. However, the conventional PTS technique requires an excessive number of complex calculations to search all permissible combinations of phase sequences, causing a steep increase in computational complexity. This paper aims to reduce the cumbersome phase-selection process by exploiting the similarity of the phase vectors: the phase vectors are generated sequentially, minimizing the number of changes from one phase vector to the next. Theoretical analysis shows that the proposed technique significantly reduces computational complexity. We also demonstrate that the achieved PAPR values remain similar, i.e., the PAPR reduction capability is preserved at reduced complexity.
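As a hedged illustration (not the authors' implementation), the conventional exhaustive PTS search that the abstract describes can be sketched as follows. The subblock count V = 4, the QPSK constellation, and the phase set {+1, −1} are illustrative assumptions:

```python
import numpy as np
from itertools import product

def papr(x):
    """Peak-to-average power ratio of a time-domain signal."""
    p = np.abs(x) ** 2
    return p.max() / p.mean()

def pts_min_papr(X, V=4, phases=(1, -1)):
    """Exhaustive PTS: partition subcarriers into V adjacent subblocks,
    IFFT each, then search all phase combinations (first factor fixed to 1,
    a common convention, giving 2**(V-1) candidates)."""
    N = len(X)
    parts = []
    for v in range(V):
        Xv = np.zeros(N, dtype=complex)
        Xv[v * N // V:(v + 1) * N // V] = X[v * N // V:(v + 1) * N // V]
        parts.append(np.fft.ifft(Xv))
    best, best_papr = None, np.inf
    for b in product(phases, repeat=V - 1):
        x = parts[0] + sum(bv * pv for bv, pv in zip(b, parts[1:]))
        p = papr(x)
        if p < best_papr:
            best_papr, best = p, (1,) + b
    return best, best_papr

rng = np.random.default_rng(0)
X = rng.choice(np.array([1+1j, 1-1j, -1+1j, -1-1j]), size=64)  # QPSK symbols
orig = papr(np.fft.ifft(X))
b, reduced = pts_min_papr(X)
```

Because the identity phase vector is among the candidates, the selected PAPR can never exceed that of the unmodified symbol; the complexity the paper targets is exactly this 2^(V−1)-candidate search.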
Clustering categorical data is more difficult than clustering numeric data. For numeric data, the inherent geometric properties can be used to define distance functions between data points. For categorical data, a distance or dissimilarity function cannot be defined directly. An extension of the classical k-means algorithm to categorical data was proposed in , where clusters are represented using representatives closely analogous to the means used in the k-means algorithm, together with a new distance measure. In this paper we first propose an alternative representation of categorical data as numeric data, making it easier to handle. This technique provides a uniform representation for data points and cluster representatives. The similarity measure proposed in  is used in this new setting. The algorithm of  has been implemented and tested in this setting, and the results obtained are reported. Experiments were conducted on two real-life data sets, namely the soybean disease and mushroom data sets. The clusters obtained on the soybean data set are pure clusters with one hundred percent accuracy; on the mushroom data set the method also achieves relatively high accuracy with small errors.
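A hedged sketch of the kind of uniform numeric representation the abstract describes (the indicator encoding, frequency-vector representative, and matching-based dissimilarity below are illustrative assumptions, not the paper's exact definitions):

```python
import numpy as np

def encode(point, domains):
    """Represent a categorical point as a concatenated indicator (one-hot)
    vector, one block per attribute. A cluster representative is then simply
    the mean of its members' vectors, i.e. per-attribute category frequencies,
    so points and representatives share one numeric format."""
    vec = []
    for value, domain in zip(point, domains):
        block = np.zeros(len(domain))
        block[domain.index(value)] = 1.0
        vec.append(block)
    return np.concatenate(vec)

def dissimilarity(x, rep, n_attrs):
    """1 minus the average per-attribute probability that x matches the cluster."""
    return 1.0 - np.dot(x, rep) / n_attrs

domains = [["red", "green", "blue"], ["small", "large"]]
a = encode(("red", "small"), domains)
b = encode(("red", "large"), domains)
rep = (a + b) / 2          # cluster representative: category-frequency vector
d = dissimilarity(a, rep, len(domains))
```

Here `a` matches the representative fully on the first attribute and with probability 0.5 on the second, giving a dissimilarity of 0.25.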
Paper  draws on rule mining methods to extract and represent the temporal relations of prototypical patterns in clinical data streams. The methodology is completely data-driven: the temporal rules are mined from physiological time series such as respiration rate, blood pressure, and heart rate. To validate the rules, a novel similarity technique is presented that compares the similarity between rule sets. An additional part of the proposed approach is the use of natural language generation techniques to represent the temporal relations between patterns.
utilized to separate the reflected region from the image. Authenticity checking is performed by comparing the reflection points obtained from the geometrical representation technique and from collective segmentation. The proposed reflection-inconsistency-based forgery detection scheme is evaluated on several manually manipulated images containing reflection inconsistencies. The experiments yielded encouraging results for exposing image manipulation with reflection inconsistency.
Abstract. The semantic web departs considerably from the way current search engines, based on traditional information search theory, work. Semantic search is carried out through ontology-based intelligent information retrieval, so a good semantic search needs a good ontology. Ontology developers need familiar notations and tools for a uniform representation of ontologies. UML, being a standard modeling language in software engineering, is better supported in terms of expertise and tools than the emerging semantic web languages. This work proposes a representation technique based on the software engineering standard UML for modeling domain knowledge for the Semantic Web. An ontology for the Company domain is presented using this software engineering modeling technique. UML tools such as Rational Rose can be used to provide support for modeling complex ontologies of a given domain.
Abstract: This paper discusses a technique designed to represent the spatial structure of ion trajectories by transforming the vector series from two-dimensional to three-dimensional space. Four techniques are available to represent spatial properties such as orientation, direction and velocity: iconic representation, the navigation function, the halo function and the transparency scheme. Iconic representation transforms data sets into three-dimensional iconic shapes, where each data set is mapped to cylindrical and conical shapes; these shapes are then used to represent ion trajectories. To improve the representation further, the navigation function, halo function and transparency scheme have been proposed. The navigation function allows navigation in three-dimensional space around the iconic representation. The halo function enhances the representation of iconic shapes by adding a subtle halo around an icon, and the transparency scheme provides a zoom-in effect during navigation around an iconic representation, in order to visualize the cone located inside the cylinder. The result is an iconic representation technique that transforms a vector series from a two-dimensional line graph in order to visualize the orientation, direction and magnitude of ion trajectories in three-dimensional space.
Representations of posets (partially ordered sets) were introduced in . In [7, 8] a criterion was given for a poset to be representation finite, i.e. to have only finitely many indecomposable representations (up to isomorphism), and all indecomposable representations of posets of finite type were described. Further, in  Coxeter transformations were constructed for representations of posets, following the framework of . This implied another criterion for a poset to be representation finite, not involving explicit calculations but using the Tits quadratic form, also analogous to that of . Note that this paper did not give all reflections corresponding to the Tits form; they were constructed in , using a generalization of representations of posets, namely representations of bisected posets.
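For orientation, the Tits quadratic form of a poset \(P\) referred to here is usually written as follows (our notation, since the excerpt does not display it; \(z_0\) corresponds to the ambient space of a representation and \(z_p\) to the coordinate indexed by the poset element \(p\)):

```latex
q(z) \;=\; z_0^2 \;+\; \sum_{p \in P} z_p^2
\;+\; \sum_{\substack{p < q \\ p,\,q \in P}} z_p z_q
\;-\; z_0 \sum_{p \in P} z_p .
```

A poset is representation finite precisely when this form is weakly positive, which is the calculation-free criterion the paragraph alludes to.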
generate a synthetic trace from it. A guided DSE approach based on packing has been proposed by Ristau et al. , in which the performance of a single task on a specific processor is estimated and used in the guidance step for mapping the task onto a processing element. The methodology described by Genbrugge et al.  studies statistical simulation as a fast simulation technique for Chip Multiprocessor (CMP) design space exploration; it enhances typical statistical simulation by modelling the memory address stream behaviour in a more micro-architecture-independent way and by modelling programs' time-varying execution behaviour. Scenario-based DSE has been proposed by Van Stralen and Pimentel  for MPSoCs, introducing the concept of workload scenarios to capture dynamic behaviour both within and between applications and to guide the DSE process. A co-evolutionary genetic algorithm has been implemented for performing the DSE. A feature selection algorithm  has been integrated as an extension to scenario-based DSE to identify the different multi-application workload scenario subsets. The representative subsets obtained from the feature selection scheme are utilized in  for predicting the fitness of the scenario subsets, in order to improve the efficiency of the DSE process and the quality of the mapping. Stochastic, deterministic and hybrid prediction techniques have been implemented and their performance with respect to multi-workload scenarios has been compared.
Index models (Sharpe 1963), often implemented via principal components analysis (Connor and Korajczyk 1993), allow for a reduction of the dimensionality of stock return correlations. Unfortunately this technique, as applied to stock returns, yields estimated principal components that are not easily interpreted in terms of their economic content. The exception is the first principal component, which is typically very close to the return on the aggregate market portfolio. Market betas are then roughly equivalent to loadings on this first principal component, and are a key statistic for capturing risk in financial markets. One potential concern is that our method is simply sorting industries based on their market beta, and that calculating rolling market betas would be enough to uncover the patterns we find. Another concern is that the increased flexibility over the latter half of the sample is simply driven by an increase in the overall correlation of industry returns, which is a proxy for the variation explained by the first principal component.
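The near-equivalence between first-principal-component loadings and market betas can be checked numerically. A hedged sketch under an assumed one-factor model (the factor volatilities, beta range, and portfolio count below are illustrative, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(1)
T, n = 2000, 10
betas = rng.uniform(0.5, 1.5, n)          # assumed "true" market betas
market = rng.normal(0, 0.01, T)           # market factor returns
eps = rng.normal(0, 0.005, (T, n))        # idiosyncratic noise
returns = market[:, None] * betas + eps   # one-factor model of n portfolios

# First principal component of the return covariance matrix
cov = np.cov(returns, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)    # eigenvalues in ascending order
pc1 = eigvecs[:, -1]                      # loadings on the largest component
pc1 *= np.sign(pc1.sum())                 # fix the arbitrary eigenvector sign

corr = np.corrcoef(pc1, betas)[0, 1]      # loadings track betas almost exactly
```

In this simulated setting the correlation between the first-PC loadings and the betas is close to one, which is the sense in which betas are "roughly equivalent to loadings on this first principal component."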
In most cases, it is not possible to obtain a closed-form expression for the conditional expectation E[x|y] required by the MMSE estimator. In such situations, one possibility is to drop the requirement of full optimality and instead minimize the MSE within a prescribed class of estimators, such as the class of linear estimators. The LMMSE (linear minimum mean square error) estimator performs better than the LS (least squares) estimator, but at the cost of higher complexity, because it depends on the channel and noise statistics. The LMMSE estimator achieves the minimum MSE among all estimators of linear form. Such estimators depend only on the first two moments of the joint distribution. Hence, although it is convenient to assume that the input x and output y are jointly Gaussian, this assumption is not mandatory as long as the first two moments of the assumed distribution are correctly specified.
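A hedged numerical sketch of the LMMSE-versus-LS comparison described above, using the standard linear estimator x̂ = C_xy C_yy⁻¹ y for a zero-mean linear model y = Ax + w (the dimensions and noise level are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, T = 3, 3, 50000

A = rng.normal(size=(m, n))               # assumed linear observation model
Cx = np.eye(n)                            # prior covariance of x
sigma2 = 0.1                              # noise variance
Cyy = A @ Cx @ A.T + sigma2 * np.eye(m)   # covariance of y
Cxy = Cx @ A.T                            # cross-covariance of x and y

W = Cxy @ np.linalg.inv(Cyy)              # LMMSE gain: uses noise statistics

x = rng.normal(size=(n, T))
y = A @ x + np.sqrt(sigma2) * rng.normal(size=(m, T))

x_hat_lmmse = W @ y                       # LMMSE estimate (zero means assumed)
x_hat_ls = np.linalg.pinv(A) @ y          # LS baseline: ignores noise statistics

mse_lmmse = np.mean((x - x_hat_lmmse) ** 2)
mse_ls = np.mean((x - x_hat_ls) ** 2)
```

Only first- and second-order statistics (Cx, sigma2, A) enter the LMMSE gain W, which is exactly why no full Gaussian assumption is needed; the LS estimator's higher MSE is the price of ignoring those statistics.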
The rest of the paper is structured in the following manner. Section 2 presents the current methods and approaches used for knowledge management and the collection of information, along with the problems faced during the process. Section 3 provides the proposed technique for knowledge representation and storage of unstructured data using hypergraphs and NoSQL. Section 4 explains the advantages of the proposed technique over the current methods. Section 5 describes the conclusion and future directions for the work.
A basic premise for intrusion detection is that when audit mechanisms are enabled to record system events, distinct evidence of legitimate activities and intrusions will be manifested in the audit data . Because of the large number of audit records and the variety of system features, efficient and intelligent data analysis tools are required to discover the behavior of system activities. The KDD99Cup  dataset and the Defense Advanced Research Projects Agency (DARPA) datasets provided by MIT Lincoln Laboratory are widely used as training and testing datasets for the evaluation of IDSs . An evolutionary neural network is introduced in , where networks for each specific type of system-call-level audit data (e.g., ps, su, ping, etc.) are evolved. Parikh and Chen  discussed a classification system using several sets of neural networks for specific classes and also proposed a technique for cost minimization in intrusion-detection problems.
The bit-level and bit-difference representation of a video signal or image is explained in the adaptive power allocation and channel coding optimization technique. An offline iterative algorithm is proposed for transmitting individual bits using the optimum combination of power and coding. In this paper, the optimum combination reduces the Mean Square Error (MSE), and the simulation results show that it provides a better-quality reconstructed image. Bits of higher significance were coded, allocated power, and transmitted, while bits of lower significance were sent without coding and power allocation; in this way the average energy per bit is maintained at the same level. The proposed combination approach is able to achieve a gain of about 3 dB, while the power allocation also reduces the peak-to-average power ratio (Mohamed El-Tarhuni, 2010).
In this section, a simple fuzzy matrix model with an effective technique is discussed for the collected data. First, the collected raw data are transformed into a fuzzy matrix model: the raw data given in matrix representation are converted into a time dependent matrix. Secondly, after obtaining the time dependent matrix, the techniques of average and standard deviation are used to convert the time dependent data matrix into an Average Time Dependent Data Matrix (ATD Matrix), with districts along the rows and years along the columns. The ATD Matrix is obtained by dividing each entry of the raw data matrix by the number of years; this matrix represents the data uniformly. At the third stage, using the average μj of each jth column and the standard deviation σj of each jth column, a parameter α is chosen from the interval [0, 1] and the Refined Time Dependent Matrix (RTD Matrix) is formed from the ATD entries aij using the formula:
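The formula itself is not reproduced in this excerpt. As a hedged sketch, a common RTD construction in the fuzzy time-series literature (which may differ in detail from the one intended here) thresholds each ATD entry against its column mean μj and standard deviation σj:

```python
import numpy as np

def rtd_matrix(atd, alpha):
    """One common RTD construction (assumed, not taken from this paper):
       e_ij = -1 if a_ij <= mu_j - alpha * sigma_j
       e_ij = +1 if a_ij >= mu_j + alpha * sigma_j
       e_ij =  0 otherwise
    with mu_j, sigma_j the mean and std of the jth column and alpha in [0, 1]."""
    mu = atd.mean(axis=0)
    sigma = atd.std(axis=0)
    e = np.zeros_like(atd)
    e[atd <= mu - alpha * sigma] = -1
    e[atd >= mu + alpha * sigma] = 1
    return e

# Toy ATD matrix: 3 districts (rows) x 2 years (columns)
atd = np.array([[2.0, 8.0], [4.0, 6.0], [9.0, 1.0]])
rtd = rtd_matrix(atd, alpha=0.5)
```

Smaller values of α mark more entries as ±1; α near 1 keeps only strong deviations from the column average, which is the role the parameter plays in the construction above.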
Principle 1 is quite straightforward and requires no further explanation. Principle 2 requires that one bring to bear a model of how the knowledge is structured early in the process, and use it to interpret subsequent data. Principle 3 means that one should use an appropriate intermediate-level knowledge representation device, and try to characterise knowledge in terms of its use and functioning. Principle 4 is a reminder that a complete analysis includes an understanding of how the system is to work - e.g. who will use it, and in what situation. One cannot gain a full understanding of the problem simply by trying to map out an expert's knowledge without regard to how it will be used. Principle 5 emphasises the fact that there is a wide variety of related topics within a domain. This means that construction of a model should be 'breadth-first', embodying all aspects at once, rather than attempting to represent fully one sub-part after another. Principles 6 and 7 are once again straightforward. Like many of the best recommendations, the utility of these statements is most apparent when they are not adhered to.
Knowledge representation languages can be used to formalise a domain expert’s knowledge and reduce the issues of subjectivity, automation and transferability. A knowledge representation approach can employ logic-based formalism or non-logic-based representation. In a logic-based approach, the representation language is usually a variant of first-order predicate calculus and reasoning is based on formal verification of logical consequences , while, in a non-logical approach, the knowledge is represented using specialised data structures and reasoning is performed by applying specialised procedures on the structures . A semantic network represents a non-logical approach with a specialised network structure in which a graphical network of nodes and arcs is used for knowledge representation. The graphical representation shows the semantic relations between concepts, which can be used to create and share the knowledge of thematic experts . Ontology, as a logic-based approach, has well-defined and formal semantics to represent knowledge.
These results replicate the characteristic distortions of hand representation we recently reported (Longo & Haggard, 2010). These distortions are substantially reduced, though qualitatively similar, on the palmar compared to the dorsal hand surface. We experience our body as a coherent volumetric whole, subject to the same physical and geometric laws as other rigid bodies. That different magnitudes of distortion are found on the two sides of a single body part, however, suggests that the body model used for position sense is not a fully unified representation.
Apel et al.  presented a novel language for FOP in C++, namely FeatureC++. They also mentioned a few problems of FOP languages. Apel et al.  demonstrated FeatureC++ along with its adaptation to Aspect-Oriented Programming (AOP) concepts. They discussed the use of FeatureC++ in solving various problems related to incremental software development using AOP concepts. They also discussed the weaknesses of FOP for modularization of crosscutting concerns. Apel et al.  discussed the limitations of crosscutting modularity and the missing support in C++. They also focused on solutions for easing the evolvability of software. Batory  presented basic concepts of FOP and a subset of the tools of the Algebraic Hierarchical Equations for Application Design (AHEAD) tool suite. Apel et al.  presented an overview of the feature-oriented software development (FOSD) process. They identified various key issues in different phases of FOSD. Thum et al.  developed an open source framework for FOSD, namely FeatureIDE, that supported all phases of FOSD along with support for feature-oriented, delta-oriented, and aspect-oriented programming languages. Pereira et al.  discussed the findings on SPL management tools from a Systematic Literature Review (SLR). These works [7, 5, 3, 4, 2, 20, 6] discussed only the programming and development aspects of FOP and did not consider the slicing aspects. We have presented a technique for dynamic slicing of feature-oriented programs with aspect-oriented extensions using Jak as the FOP language.
Modified Vector Space Representation  extends the previous representation (which considers unique terms from the training data only) by incorporating a mechanism to handle unforeseen terms during testing. We deliberately add a system call number (referred to as unknown, unk) to the list, whose value is higher than any system call number present in the OS system call list. We form terms of length l comprising this unknown system call number, including one term consisting entirely of the unknown system call number. Let E be the set of unknown terms comprising the unk system call number; unk is a number deliberately added to the list of system call numbers to map terms that are not seen during training but are found during testing. Hence, the new feature set can be defined as U_l^new = U_l^train ∪ E, where U_l^new comprises all unique terms of length l appearing in the training data (U_l^train) together with the set of terms containing the unk system call number (E). With N_train denoting the number of system call trace sequences in training, the memory complexity of this representation is O(N_train × |U_l^new|).
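A hedged sketch of this augmentation (the concrete syscall numbers, the value 999 for unk, and the helper names are illustrative assumptions):

```python
from itertools import product

def augment_with_unk(train_terms, syscalls, unk, l):
    """Build U_l^new = U_l^train ∪ E, where E is the set of all length-l terms
    over (observed syscalls + unk) that contain unk at least once, including
    the term made entirely of unk."""
    alphabet = sorted(syscalls) + [unk]
    E = {t for t in product(alphabet, repeat=l) if unk in t}
    return set(train_terms) | E

def map_term(term, known, unk):
    """At test time, replace any syscall number not seen in training by unk,
    so every test term falls inside U_l^new."""
    return tuple(s if s in known else unk for s in term)

syscalls = {1, 2, 3}               # syscall numbers observed in training
unk = 999                          # reserved number larger than any real one
train_terms = {(1, 2), (2, 3)}     # U_l^train for l = 2
U_new = augment_with_unk(train_terms, syscalls, unk, l=2)
t = map_term((1, 7), syscalls, unk)   # 7 was never seen in training
```

With three observed syscalls and l = 2, E contains 4² − 3² = 7 unk-bearing terms, so |U_l^new| = 9 here; this growth of the feature set with l is what drives the O(N_train × |U_l^new|) memory complexity quoted above.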