The method of converting a rhotrix to a special matrix called a “coupled matrix” was suggested by [9]. This idea was used to solve systems of n × n and (n − 1) × (n − 1) matrix problems simultaneously. The concepts of rhotrix vectors and rhotrix vector spaces and their properties were introduced by [3] and [4], respectively. To the best of our knowledge, the concepts of the rank and the linear transformation of a rhotrix have not been studied. In this paper, we consider the rank of a rhotrix and characterize its properties. We also extend this idea to give a necessary and sufficient condition for representing a rhotrix linear transformation.

Most of the existing work in metric learning has been done in the Mahalanobis distance (or metric) learning paradigm, which has been found to be a sufficiently powerful class of metrics for a variety of data. In one of the earliest papers on metric learning, Xing et al. (2002) propose a semidefinite programming formulation under similarity and dissimilarity constraints for learning a Mahalanobis distance, but the resulting formulation is slow to optimize and has been outperformed by more recent methods. Weinberger et al. (2005) formulate the metric learning problem in a large margin setting, with a focus on k-NN classification. They also formulate the problem as a semidefinite programming problem and consequently solve it using a method that combines sub-gradient descent and alternating projections. Goldberger et al. (2004) proceed to learn a linear transformation in the fully supervised setting. Their formulation seeks to ‘collapse classes’ by constraining within-class distances to be zero while maximizing between-class distances. While each of these algorithms was shown to yield improved classification performance over the baseline metrics, their constraints do not generalize outside of their particular problem domains; in contrast, our approach allows arbitrary linear constraints on the Mahalanobis matrix. Furthermore, these algorithms all require eigenvalue decompositions or semidefinite programming, which is at least cubic in the dimensionality of the data.
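All of the Mahalanobis metrics discussed above take the form d_M(x, y) = (x − y)ᵀ M (x − y) for a symmetric positive semidefinite matrix M. A minimal sketch of evaluating such a metric (with a hypothetical hand-picked M, not one learned by any of the cited methods):

```python
import numpy as np

def mahalanobis(x, y, M):
    """Squared Mahalanobis distance d_M(x, y) = (x - y)^T M (x - y),
    where M is a symmetric positive semidefinite matrix."""
    d = x - y
    return float(d @ M @ d)

x = np.array([1.0, 2.0])
y = np.array([3.0, 1.0])

# With M = I the distance reduces to the squared Euclidean distance.
assert np.isclose(mahalanobis(x, y, np.eye(2)), 5.0)

# A learned M reweights (and, off the diagonal, correlates) the feature axes.
M = np.array([[2.0, 0.0],
              [0.0, 0.5]])
print(mahalanobis(x, y, M))  # 2*(-2)^2 + 0.5*(1)^2 = 8.5
```

The linear constraints mentioned in the text act on the entries of M; the distance evaluation itself is unchanged regardless of how M was learned.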

The obtained results show that the Gauss curvature of the surface does not change when a special linear transformation in curvilinear coordinates is performed. A linear transformation in curvilinear coordinates defines a deformation bounded by the surface in Euclidean space; under this deformation, the Gauss curvature of the surface remains unchanged.

In this paper, we extend their redundancy concept to estimation of an arbitrary linear transformation of an original parameter vector and derive the necessary and sufficient condition for an extra set of moment conditions to be redundant, given an initial set of moment conditions, for the efficient estimation of the linear transformation of parameters. More specifically, the current paper makes three new contributions. Firstly, we extend the moment redundancy concept of [1] to estimation of an arbitrary linear transformation of the original parameters. Secondly, the redundancy condition derived in the current paper unifies the full and partial redundancy conditions of [1]; as a result, our redundancy condition given in the theorem of Section 2 includes the full and partial redundancy conditions of [1] as two special cases. Lastly, we use a much simpler approach based on matrix ranks to derive our main results, instead of the “brute-force” matrix algebra adopted by [1] and [2].

Abstract- Each of us has a large set of stored images, such as scanned copies and multimedia files. Image retrieval is the process of finding images similar to a query image, or of finding out to which database the query image belongs. Here we use two linear transformation techniques: the Gabor-Walsh Wavelet Pyramid technique and the Curvelet transform method. Feature vectors of each of the database images are extracted by applying these techniques; then, by calculating Euclidean distances, we find to which database the query image belongs. The comparative analysis is done based on the FAR, FRR, and TSR parameters. Index Terms- CBIR (Content Based Image Retrieval), FAR (False Acceptance Rate), FRR (False Rejection Rate), GWWP (Gabor Walsh & Wavelet Pyramid), TSR (True Success Rate).
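The retrieval step described, ranking database images by the Euclidean distance between feature vectors, can be sketched as follows. The feature values here are hypothetical placeholders, not actual GWWP or Curvelet outputs:

```python
import numpy as np

# Hypothetical feature vectors for three database images and one query,
# standing in for the vectors the GWWP or Curvelet extraction step produces.
database = {
    "img_a": np.array([0.2, 0.8, 0.5]),
    "img_b": np.array([0.9, 0.1, 0.4]),
    "img_c": np.array([0.3, 0.7, 0.56]),
}
query = np.array([0.25, 0.75, 0.55])

# Rank database images by Euclidean distance to the query feature vector;
# the nearest one decides where the query belongs.
ranked = sorted(database, key=lambda k: np.linalg.norm(database[k] - query))
print(ranked[0])  # the closest match
```

In practice the same comparison is run against each candidate database, and the database containing the nearest feature vector is reported.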

The uniqueness of this study lies in its proposition of a smooth linear transformation function, which is found to yield better results than the traditional binary classification of beta for capturing the asymmetry. The smooth transition measure gives less weight to unsystematic and noisy movements and is thus expected to capture long-run departures better. This indicator not only captures the direction of the market return but also contains a measure of its magnitude. It was observed from the study that the proposed method of market classification could capture the asymmetric characteristics of beta in a better way. In the study of 777 stocks, the asymmetric influence was significant for 112 stocks, whereas the number was much lower when a binary up-and-down classification was used in accordance with conventional procedures. Thus, our proposed smooth linear transformation function yields a superior result for capturing the asymmetry. Whether these asymmetries can be used for investment decisions needs further exploration.

The sophistication of the models and the complexity of the financial products also imply that only in rare cases do tractable pricing formulae exist for these products. In pricing most exotic derivative securities, we typically resort to numerical methods such as binomial models, finite difference methods, Monte Carlo (MC), or quasi-Monte Carlo (QMC) methods. In the past decade, QMC has become a popular tool in computational finance. Early finance applications of QMC mainly focus on Black-Scholes type models; see [14]. More recently, this method has been extended to other more exotic models, particularly the Lévy models (see [8], [19], [2], [16], [13]). Many numerical studies seem to suggest that the success of QMC is intricately related to the notion of effective dimension. Dimension reduction techniques such as the Brownian bridge construction ([17] and [4]), the principal component construction [1], and the linear transformation (LT) method [12] have been proposed to further enhance QMC.
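As one illustration, the Brownian bridge construction mentioned above samples the terminal value of the Brownian path first and then fills in midpoints conditionally, so that the leading coordinates capture most of the path's variance. A minimal standalone sketch (not the code of [17] or [4]):

```python
import numpy as np

rng = np.random.default_rng(42)

def brownian_bridge_path(n_steps, T=1.0):
    """Build a discretized Brownian path on [0, T] by the bridge construction:
    terminal point first, then midpoints conditionally, level by level."""
    assert n_steps & (n_steps - 1) == 0, "n_steps must be a power of two"
    t = np.linspace(0.0, T, n_steps + 1)
    W = np.zeros(n_steps + 1)
    W[-1] = np.sqrt(T) * rng.standard_normal()  # terminal value sampled first
    step = n_steps
    while step > 1:
        half = step // 2
        for i in range(half, n_steps, step):
            left, right = i - half, i + half
            # conditional distribution of the midpoint given both endpoints
            mean = 0.5 * (W[left] + W[right])
            var = (t[i] - t[left]) * (t[right] - t[i]) / (t[right] - t[left])
            W[i] = mean + np.sqrt(var) * rng.standard_normal()
        step = half
    return t, W

t, W = brownian_bridge_path(8)
assert W[0] == 0.0 and len(W) == 9
```

In a QMC setting the standard normal draws would come from a scrambled low-discrepancy sequence rather than a pseudo-random generator; the ordering of the draws is exactly what the construction is designed to exploit.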

The image is enhanced according to the transformation technique applied, linear or nonlinear. In a linear transformation, a slope of linear values is selected; pixels whose intensity is close to this slope are kept as they are, while pixels whose intensity is less or greater than this linear slope are adjusted. The proposed work consists of histogram equalization, HDR images, linear transformation, and kernel padding, described in the following.
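A minimal sketch of such a linear intensity transformation, written here as a generic contrast stretch with hypothetical cutoffs rather than the exact adjustment rule of the proposed work:

```python
import numpy as np

def linear_stretch(img, low, high):
    """Linearly map pixel intensities in [low, high] onto the full
    [0, 255] range; intensities outside the selected range are clipped.
    `low` and `high` play the role of the selected linear slope's bounds."""
    img = img.astype(np.float64)
    out = (img - low) * 255.0 / (high - low)
    return np.clip(out, 0, 255).astype(np.uint8)

pixels = np.array([[40, 100], [160, 220]], dtype=np.uint8)
print(linear_stretch(pixels, 40, 220))
# 40 -> 0, 100 -> 85, 160 -> 170, 220 -> 255
```

Pixels already inside the chosen range are rescaled proportionally; those below `low` or above `high` are pushed to the extremes, which is the adjustment behavior the paragraph describes.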

The linear transformation is the simplest and most basic transformation, just as the linear function is the simplest and most basic function. The linear transformation is a main object of study in linear algebra. This paper draws on related knowledge of the linear-transformation idea in linear algebra, carefully studies and analyzes the position and application of this idea in mathematics, and selects several typical examples.
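The defining property T(αu + βv) = αT(u) + βT(v) can be checked numerically for the basic example of a matrix map T(x) = Ax:

```python
import numpy as np

# Any m-by-n matrix A defines a linear transformation T(x) = A x.
A = np.array([[1.0, 2.0],
              [0.0, 3.0]])
T = lambda x: A @ x

u = np.array([1.0, -1.0])
v = np.array([2.0, 0.5])
a, b = 3.0, -2.0

# Linearity: T(a*u + b*v) equals a*T(u) + b*T(v).
lhs = T(a * u + b * v)
rhs = a * T(u) + b * T(v)
assert np.allclose(lhs, rhs)
print(lhs)
```

Conversely, every linear transformation between finite-dimensional spaces is given by such a matrix once bases are fixed, which is why the matrix viewpoint dominates in linear algebra.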

Abstract. Linear transformations are applied to white-box cryptographic implementations for the diffusion effect, to prevent key-dependent intermediate values from being analyzed. However, it has been shown that there still exists a correlation before and after the linear transformation, and thus this is not enough to protect the key against statistical analysis. So far, the Hamming weight of rows in the invertible matrix has been considered the main cause of the key leakage from the linear transformation. In this study, we present an in-depth analysis of the distribution of intermediate values and the characteristics of block invertible binary matrices. Our mathematical analysis and experimental results show that the balanced distribution of the key-dependent intermediate value is the main cause of the key leakage.

…the scene atmospheric scattering coefficient and gets the scene albedo [7]. Cai, Bolun used Convolutional Neural Networks (CNN) relying on feature extraction to estimate a medium transmission map, offered a nonlinear function called Bilateral Rectified Linear Unit (BReLU) to enhance the quality of the dehazed image, and then made connections between the proposed DehazeNet and existing models [8].

The objective of this paper is to show that log-convexity is preserved under the p, q-binomial transformation, using a theorem of Liu and Wang [11, Theorem 4.8], who established the connection between linear transformations preserving log-convexity and the x-log-convexity. As an application, we give the log-convexity of the p-Galois numbers.

SUMMARY & CONCLUSION In the present paper, principal component analysis is carried out on the data collected from a survey of sample individual investors, to extract the factors influenci[r]

Negative of a Linear Transformation: Let E and E′ be any two linear spaces (over the field K). Let T : E → E′ be a linear transformation from E into E′. Then the negative of T, denoted by −T, is defined by (−T)(u) = −[T(u)] for every u ∊ E, with T(u) ∊ E′. Since T(u) ∊ E′ ⇒ −T(u) ∊ E′, −T is also a function from E into E′. Now, let α, β ∊ K and u, v ∊ E.
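The remaining verification that −T is itself linear follows directly from the definition and the linearity of T (a sketch of the step the excerpt sets up):

```latex
(-T)(\alpha u + \beta v) = -\,T(\alpha u + \beta v)
  = -\bigl(\alpha\,T(u) + \beta\,T(v)\bigr)
  = \alpha\,[-T(u)] + \beta\,[-T(v)]
  = \alpha\,(-T)(u) + \beta\,(-T)(v),
```

so −T is a linear transformation from E into E′.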

These various methods have been investigated for various purposes. However, there has been no unification of these various methods. The purpose of the present paper is to provide a unified description of these methods in terms of a linear transformation for position and momentum, by referring to the system of a generalized oscillator. Since we are concerned with a linear canonical transformation, the classical and quantum correspondence is one-to-one. We also show that the invariant operator can be considered as part of a linear canonical transformation.

The Discrete Wavelet Transform (DWT) is a linear transformation that operates on a data vector whose length is an integer power of two, transforming it into a numerically different vector of the same length. Unlike the ordinary Fourier transform, it captures both frequency and location information (location in time). DWT-derived features are used for digital image texture analysis. Wavelets appear to be a suitable tool for this task because they allow analysis of images at various levels of resolution. The proposed features have been tested on images from the standard Brodatz catalogue. The DWT separates data into different frequency components and then studies each component with a resolution matched to its scale. The main feature of the DWT is its multiscale representation of a function: by using wavelets, a given function can be analyzed at various levels of resolution. Signal transmission is based on the transmission of a series of numbers, and such series representations of a function are important in all types of signal transmission. The wavelet representation of a function is a newer technique that improves on the Fourier transform. The Fourier transform is a powerful tool for analyzing the components of a stationary signal, but it fails for non-stationary signals, whereas the wavelet transform allows the components of a non-stationary signal to be analyzed.
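A one-level Haar DWT makes the linearity and length preservation described above concrete. This is a textbook sketch, not the specific filter bank used for the Brodatz features:

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar discrete wavelet transform: a linear
    transformation of a vector whose length is a power of two into
    approximation (low-frequency) and detail (high-frequency) halves."""
    x = np.asarray(x, dtype=np.float64)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return approx, detail

signal = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
cA, cD = haar_dwt(signal)

# The transform preserves length: 4 approximation + 4 detail coefficients.
assert len(cA) + len(cD) == len(signal)
# It is orthogonal, so the signal's energy is preserved as well.
assert np.isclose(np.sum(cA**2) + np.sum(cD**2), np.sum(signal**2))
```

Applying the same split recursively to the approximation coefficients yields the multiscale representation the paragraph refers to; for images the transform is applied along rows and columns in turn.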

T(X, P) is a subsemigroup of T(X), and if P = {X}, then T(X, P) = T(X). Our aim in this paper is to give necessary and sufficient conditions for elements in T(X, P) to be left or right magnifying. Moreover, we apply those conditions to give necessary and sufficient conditions for elements in some generalized linear transformation semigroups.

Given the high approximation capacity of modern deep neural networks, it is natural to replace the linear structures in the transition and emission models of (Doretto et al. 2003) by neural networks. This leads to a dynamic generator model with the following two components. (1) The emission model, a generator network that maps the d-dimensional state vector to the D-dimensional image via a top-down deconvolution network. (2) The transition model, where the state vector of the next frame is obtained by a non-linear transformation of the state vector of the current frame as well as an independent Gaussian white noise vector that provides randomness in the transition. The non-linear transformation can be parametrized by a feedforward neural network or multi-layer perceptron. In this model, the latent random vectors that generate the observed data are the independent Gaussian noise vectors, also called innovation vectors in (Doretto et al. 2003). The state vectors and the images can be deterministically computed from these noise vectors.
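The transition component described above can be sketched as follows, with a toy one-layer tanh network and hypothetical weights standing in for the full multi-layer perceptron:

```python
import numpy as np

rng = np.random.default_rng(0)

d = 4                              # state dimension (hypothetical)
W = rng.normal(size=(d, d)) * 0.5  # toy transition weights
b = np.zeros(d)

def transition(s, noise_scale=0.1):
    """Next state = nonlinear transformation of the current state
    plus an independent Gaussian innovation vector."""
    eps = rng.normal(size=d) * noise_scale  # innovation vector
    return np.tanh(W @ s + b) + eps

# Roll out a short state trajectory; each state would then be mapped
# to an image by the emission (deconvolution) network.
s = rng.normal(size=d)
states = [s]
for _ in range(5):
    s = transition(s)
    states.append(s)
print(len(states))
```

Because the innovation vectors are the only sources of randomness, the whole trajectory (and hence the video) is a deterministic function of them, which is the property the paragraph highlights.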

The Diophantine equation given by (4) represents a cone and is analyzed for its non-zero distinct integer solutions. A few interesting relations among the solutions and special polygonal and pyramidal numbers are presented. On introducing the linear transformation x = u + v, y = u − v and employing the method of factorization, different patterns of non-zero distinct integer solutions to the above equation are obtained.
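Since equation (4) itself is not reproduced in this excerpt, the effect of the substitution x = u + v, y = u − v can be illustrated on the generic difference-of-squares identity it is typically used for:

```python
# The substitution x = u + v, y = u - v turns a difference of squares into
# a product: x^2 - y^2 = (u + v)^2 - (u - v)^2 = 4uv, which is what makes
# the method of factorization applicable to the transformed equation.
def check_identity(u, v):
    x, y = u + v, u - v
    return x * x - y * y == 4 * u * v

# Verify the identity on a grid of integer pairs.
assert all(check_identity(u, v)
           for u in range(-10, 11) for v in range(-10, 11))
print("identity holds on the sampled grid")
```

Once the left-hand side is a product like 4uv, matching integer factorizations of the right-hand side of the cone equation yields the distinct solution patterns the abstract mentions.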