Parahermitian matrices

Inversion of Parahermitian matrices

Parahermitian matrices arise in broadband multiple-input multiple-output (MIMO) systems and in array processing, and require inversion in some instances. In this paper, we apply a polynomial eigenvalue decomposition obtained by the sequential best rotation algorithm to decompose a parahermitian matrix into a product of two paraunitary, i.e. lossless and easily invertible, matrices and a diagonal polynomial matrix. The inversion of the overall parahermitian matrix therefore reduces to the inversion of the auto-correlation sequences in this diagonal matrix. We investigate a number of different approaches to obtain this inversion, and assess the numerical stability and complexity of the inversion process.
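The reduction described above can be illustrated on a single scalar auto-correlation sequence from the diagonal matrix. The sketch below is not the paper's implementation but a hypothetical frequency-sampling approach in NumPy: sample the power spectral density, take its reciprocal, and transform back, which is valid only when the spectrum is strictly positive.

```python
import numpy as np

def invert_autocorrelation(r, n_fft=256):
    """Approximately invert an auto-correlation sequence r, which holds
    lags -L..L with the zero lag at the centre index.

    The PSD S(e^{jw}) is sampled on an n_fft-point grid, its reciprocal
    taken (valid only if S(e^{jw}) > 0 everywhere), and an inverse DFT
    returns a truncated Laurent series of 1/S(z), zero lag centred."""
    L = len(r) // 2
    taus = np.arange(-L, L + 1)
    w = 2j * np.pi * np.arange(n_fft) / n_fft
    S = np.exp(-np.outer(w, taus)) @ r      # PSD samples of r
    g = np.fft.ifft(1.0 / S)                # circularly wrapped inverse sequence
    return np.fft.fftshift(g)               # centre the zero lag

# r is the auto-correlation of h(z) = 1 + 0.5 z^{-1}; its exact inverse
# decays geometrically, so the truncation error here is negligible
r = np.array([0.5, 1.25, 0.5])
g = invert_autocorrelation(r).real
d = np.convolve(r, g)                       # should approximate a unit impulse
```

A near-zero PSD makes the reciprocal blow up, which is precisely the kind of numerical-stability issue the paper assesses.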

Maximum energy sequential matrix diagonalisation for parahermitian matrices

Based on the ensemble of parahermitian matrices S(z) defined at the beginning of Sec. IV, the diagonalisation performance for R(z) = S(z) with M = L = 5 over a number of iterations i is provided in Fig. 7. The graph shows the remaining off-diagonal energy, normalised by the total energy in R(z), which is invariant under paraunitary operations. With the maximum energy SMD too complex to obtain ensemble-averaged results, only the SMD algorithms reviewed in Sec. II and SBR2 [2] are shown. Since the multiple-shift versions perform within approx. 10% of the maximum transferable off-diagonal energy per iteration, these limited search-strategy algorithms provide an excellent trade-off between energy transfer and implementation complexity.
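The normalised off-diagonal energy used as the metric here can be computed directly from the coefficient array of R(z). A minimal sketch, assuming a hypothetical (M, M, n_lags) NumPy layout for the polynomial matrix:

```python
import numpy as np

def off_diag_energy_ratio(R):
    """Remaining off-diagonal energy of a polynomial matrix R(z),
    normalised by the total energy. R has shape (M, M, n_lags), i.e.
    one M x M coefficient matrix per lag. Both energies are invariant
    under paraunitary operations."""
    total = np.sum(np.abs(R) ** 2)
    diag = sum(np.sum(np.abs(R[m, m, :]) ** 2) for m in range(R.shape[0]))
    return (total - diag) / total
```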

Divide-and-conquer sequential matrix diagonalisation for parahermitian matrices

When designing PEVD implementations for real applications, the potential for the proposed techniques to increase diagonalisation performance while reducing complexity requirements offers benefits. A further advantage of the DC-SMD algorithm is its ability to produce multiple independent parahermitian matrices, which may be processed in parallel. Simulation results demonstrate that DC-SMD outperforms

Restricted update sequential matrix diagonalisation for parahermitian matrices

Abstract—A number of algorithms capable of iteratively calculating a polynomial matrix eigenvalue decomposition (PEVD) have been introduced. The PEVD is an extension of the ordinary EVD to polynomial matrices and will diagonalise a parahermitian matrix using paraunitary operations. This paper introduces a novel restricted update approach for the sequential matrix diagonalisation (SMD) PEVD algorithm, which can be implemented with minimal impact on algorithm accuracy and convergence. We demonstrate that by using the proposed restricted update SMD (RU-SMD) algorithm instead of SMD, PEVD complexity and execution time can be significantly reduced. This reduction impacts on a number of broadband multichannel problems.

Sequential matrix diagonalization algorithms for polynomial EVD of parahermitian matrices

Abstract —For parahermitian polynomial matrices, which can be used, for example, to characterise space-time covariance in broadband array processing, the conventional eigenvalue decomposition (EVD) can be generalised to a polynomial matrix EVD (PEVD). In this paper, a new iterative PEVD algorithm based on sequential matrix diagonalisation (SMD) is introduced. At every step the SMD algorithm shifts the dominant column or row of the polynomial matrix to the zero lag position and eliminates the resulting instantaneous correlation. A proof of convergence is provided, and it is demonstrated that SMD establishes diagonalisation faster and with lower order operations than existing PEVD algorithms.
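One iteration of the shift-and-eliminate step described in this abstract can be sketched as follows. This is a simplified column-based variant, not the authors' code; R(z) is assumed stored as an (M, M, n_lags) NumPy coefficient array with the zero lag at the centre index.

```python
import numpy as np

def smd_step(R):
    """One simplified iteration of sequential matrix diagonalisation (SMD)
    on a parahermitian R(z): (i) find the dominant off-diagonal column over
    all lags, (ii) move it to the zero lag with an elementary paraunitary
    delay, (iii) eliminate the resulting instantaneous correlation by an
    EVD of the zero-lag slice."""
    M, _, n_lags = R.shape
    mid = n_lags // 2

    # (i) energy of each column at each lag, minus the diagonal at lag zero
    energy = np.sum(np.abs(R) ** 2, axis=0)                  # shape (M, n_lags)
    energy[np.arange(M), mid] -= np.abs(np.diagonal(R[:, :, mid])) ** 2
    k, t = np.unravel_index(np.argmax(energy), energy.shape)
    shift = t - mid

    # (ii) paraunitary delay diag(..., z^{-shift}, ...): row k is delayed,
    # column k advanced, and the (k, k) element stays in place; padding by
    # |shift| zeros on both sides keeps the circular rolls artefact-free
    if shift:
        R = np.pad(R, ((0, 0), (0, 0), (abs(shift), abs(shift))))
        mid += abs(shift)
        R[k, :, :] = np.roll(R[k, :, :], shift, axis=-1)
        R[:, k, :] = np.roll(R[:, k, :], -shift, axis=-1)

    # (iii) unitary EVD of the zero-lag slice, applied at every lag
    _, Q = np.linalg.eigh(R[:, :, mid])
    return np.einsum('ij,jkl,km->iml', Q.conj().T, R, Q)
```

A full SMD implementation would additionally accumulate the paraunitary factors, truncate the growing lag support, and iterate to convergence.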

On the existence and uniqueness of the eigenvalue decomposition of a parahermitian matrix

Abstract—This paper addresses the extension of the factorization of a Hermitian matrix by an eigenvalue decomposition (EVD) to the case of a parahermitian matrix that is analytic at least on an annulus containing the unit circle. Such parahermitian matrices contain polynomials or rational functions in the complex variable z and arise, e.g., as cross spectral density matrices in broadband array problems. Specifically, conditions for the existence and uniqueness of eigenvalues and eigenvectors of a parahermitian matrix EVD are given, such that these can be represented by a power or Laurent series that is absolutely convergent, at least on the unit circle, permitting a direct realization in the time domain. Based on an analysis of the unit circle, we prove that eigenvalues exist as unique and convergent but likely infinite-length Laurent series. The eigenvectors can have an arbitrary phase response and are shown to exist as convergent Laurent series if eigenvalues are selected as analytic functions on the unit circle, and if the phase response is selected such that the eigenvectors are Hölder continuous with α > 1/2 on the unit circle. In the case of a discontinuous phase response or if spectral majorisation is enforced for intersecting eigenvalues, an absolutely convergent Laurent series solution for the eigenvectors of a parahermitian EVD does not exist. We provide some examples, comment on the approximation of a parahermitian matrix EVD by Laurent polynomial factors, and compare our findings to the solutions provided by polynomial matrix EVD algorithms.

Memory and complexity reduction in parahermitian matrix manipulations of PEVD algorithms

When designing PEVD implementations for real applications, the potential for the proposed techniques to reduce complexity and memory requirements therefore offers benefits without deficits w.r.t. important performance metrics such as the diagonalisation of the SMD algorithm. The reduced representation of parahermitian matrices proposed here can be extended to any PEVD algorithm by adapting the shift and rotation operations accordingly.

Impact of source model matrix conditioning on iterative PEVD algorithms

Polynomial parahermitian matrices can accurately and elegantly capture the space-time covariance in broadband array problems. To factorise such matrices, a number of polynomial EVD (PEVD) algorithms have been suggested. At every step, these algorithms move various amounts of off-diagonal energy onto the diagonal, to eventually reach an approximate diagonalisation. In practical experiments, we have found that the relative performance of these algorithms depends quite significantly on the type of parahermitian matrix that is to be factorised. This paper aims to explore this performance space, and to provide some insight into the characteristics of PEVD algorithms.

Uncertainty Weight Generation Approach Based on Uncertainty Comparison Matrices

suitable to regard every non-deterministic phenomenon as a random phenomenon, especially when the non-deterministic phenomenon is caused by subjective judgment. On the one hand, many surveys have shown that subjective uncertainty cannot be modeled by fuzziness. On the other hand, in some works, fuzzy parameters are assumed to have known membership functions and credibility distributions. However, Atanu Sengupta and Tapan Kumar Pal [8] considered that in the real world it is difficult for a decision maker (DM) to specify the membership function or probability distribution in an ambiguous environment. So some scholars point out that the use of interval comparison matrices may serve the purpose better in some cases. But the method of deriving weights from inconsistent interval comparison matrices is still subject to further investigation, and the interval weights may cause ranking reversal in the sense of the definition of the degree of preference. In a word, the method of interval judgments is likely to be flawed.

Hamacher Sum and Hamacher Product of Fuzzy Matrices

Abstract. In this paper, we define two new operations, called the Hamacher sum and the Hamacher product of fuzzy matrices, and investigate the algebraic properties of fuzzy matrices under these operations, as well as the properties of fuzzy matrices when these new operations are combined with the well-known operations ∧ and ∨. We also prove some new inequalities connected with fuzzy matrices.
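For concreteness, the standard (unparametrised) Hamacher t-norm and t-conorm act elementwise on fuzzy matrices with entries in [0, 1]; whether the paper uses exactly these or a parametrised variant of the Hamacher family is an assumption here.

```python
import numpy as np

def hamacher_product(A, B):
    """Elementwise Hamacher product of fuzzy matrices A, B in [0, 1]:
    T(a, b) = ab / (a + b - ab), with T(0, 0) = 0 by convention."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    num, den = A * B, A + B - A * B
    return np.divide(num, den, out=np.zeros_like(num), where=den != 0)

def hamacher_sum(A, B):
    """Elementwise Hamacher sum: S(a, b) = (a + b - 2ab) / (1 - ab),
    with S(1, 1) = 1 by convention."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    num, den = A + B - 2 * A * B, 1 - A * B
    return np.divide(num, den, out=np.ones_like(num), where=den != 0)
```

Both operations keep entries in [0, 1], so the set of fuzzy matrices is closed under them, which is the kind of algebraic property the paper investigates.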

Statistically strongly regular matrices and some core theorems

For any two sequence spaces X and Y, we denote by (X, Y) the class of matrices A such that Ax ∈ Y for x ∈ X, provided that the series on the right of (1.8) converges for each n. If, in addition, lim Ax = lim x, then we denote such a class by (X, Y; P) or (X, Y)_reg.
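A classical member of such a class (X, Y; P) is the Cesàro matrix of order one, which maps every convergent sequence to a sequence with the same limit. A small numeric illustration (the particular sequence chosen below is just an example):

```python
import numpy as np

def cesaro_transform(x):
    """Apply the Cesaro matrix C1, with entries a_{n,k} = 1/(n+1) for
    k <= n and 0 otherwise, i.e. running averages. C1 is regular:
    lim Ax = lim x for every convergent sequence x."""
    return np.cumsum(x) / np.arange(1, len(x) + 1)

x = 1 + (-1.0) ** np.arange(2000) / np.arange(1, 2001)  # x_n -> 1
y = cesaro_transform(x)                                  # y_n -> 1 as well
```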

Selection Criteria of Measurement Matrix for Compressive Sensing Based Medical Image Reconstruction

Abstract—In this article we design a measurement matrix based on compressive sensing for medical images in order to achieve low-cost medical imaging. In compressive-sensing-based reconstruction of an image, the number of samples is smaller than the conventional Nyquist theorem suggests. In this paper we first apply DWT (Discrete Wavelet Transform) / DCT (Discrete Cosine Transform) transformations to the medical image, and then use Gaussian random matrices, Bernoulli random matrices, partial orthogonal random matrices, partial Hadamard matrices, Toeplitz matrices, and QC_LDPC matrices as measurement matrices, respectively. The compressed medical images are reconstructed with different matching pursuit algorithms: OMP (Orthogonal Matching Pursuit), the L1 algorithm and GBP (Greedy Basis Pursuit). Using the same number of measurements, we select the matrix with the best reconstruction as the measurement matrix for medical images. Reconstruction PSNR values, SSIM values, CR values and reconstruction time were used to compare the experimental results. The visual quality of reconstructed medical images is of prime importance. According to the experimental results, the visual quality of medical images reconstructed with OMP matching pursuit and the DWT transform is better than with the other algorithms, so this paper selects partial Hadamard matrices with the DWT transformation and OMP matching pursuit as the medical image measurement matrix.
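Of the reconstruction algorithms compared, OMP is the simplest to sketch. Below is a minimal, generic textbook OMP in NumPy, not the authors' implementation:

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal Matching Pursuit: greedily recover a k-sparse x from
    y = Phi @ x. Each step picks the column of Phi most correlated with
    the residual, then re-fits all chosen coefficients by least squares."""
    residual, support = y.astype(float), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x = np.zeros(Phi.shape[1])
    x[support] = coef
    return x
```

In a compressive-sensing pipeline, Phi would be the product of the measurement matrix and the sparsifying (DWT/DCT) basis.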

Netted matrices

We prove that powers of 4-netted matrices (whose entries satisfy a four-term recurrence δ·a_{i,j} = α·a_{i−1,j} + β·a_{i−1,j−1} + γ·a_{i,j−1}) preserve the property of nettedness: the entries of the e-th power satisfy δ_e·a^{(e)}_{i,j} = α_e·a^{(e)}_{i−1,j} + β_e·a^{(e)}_{i−1,j−1} + γ_e·a^{(e)}_{i,j−1}, where the coefficients are all instances of the same sequence x_{e+1} = (β + δ)·x_e − (βδ + αγ)·x_{e−1}. Also, we find a matrix Q_n(a, b) and a vector v such that Q_n(a, b)^e · v acts as a shifting on the general second-order recurrence sequence with parameters a, b. The shifting action of Q_n(a, b) generalizes the known property

Application of Matrices

We have presented a computational approach to arithmetic on abstract matrices. We defined a simple basis function that allows us to represent every abstract matrix, regardless of its structural composition, as a sum of region terms. Given this representation, we could define matrix addition and multiplication straightforwardly as addition and multiplication of the sums. In fact, we could show that this representation enables symbolic computations on abstract matrices that are considered mathematically routine but for which only limited automated support exists. In a next step, we therefore intend to implement bespoke algorithms for abstract matrix arithmetic and combine them with our parsing procedure presented in [5]. Moreover, we intend to use our representation as a basis for developing other operations on abstract matrices, such as computing Jordan normal forms or determinants. Another advantage of our representation is that the result of an arithmetic operation on two abstract matrices can be examined by systematic arithmetic manipulations and by exploiting the partial-order structure of the basis function to yield structural properties of the resulting matrix. This could be further exploited to perform and automate general proofs of closure properties for certain classes of structural matrices.

3 Matrices

The use of matrices to store information is demonstrated by the following two examples. Four exporters A, B, C and D sell televisions (t), CD players (c), refrigerators (r) and washing machines (w). The sales in a particular month can be represented by a 4 × 4 array of numbers. This array of numbers is called a matrix.
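Such a sales array can be written down directly; the figures below are invented for illustration, since the text does not give the actual sales numbers.

```python
import numpy as np

# Hypothetical monthly sales for exporters A-D (rows) of televisions (t),
# CD players (c), refrigerators (r) and washing machines (w) (columns)
sales = np.array([
    [120,  95, 30, 41],   # A
    [ 80, 114, 12, 33],   # B
    [ 62,  40, 51, 20],   # C
    [105,  77, 28, 19],   # D
])

total_per_exporter = sales.sum(axis=1)  # row sums: units sold by each exporter
total_per_product = sales.sum(axis=0)   # column sums: units sold of each product
```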

3A Operations with matrices 3B Multiplying matrices 3C Powers of a matrix 3D Multiplicative inverse and solving matrix equations 3E The transpose of a matrix 3F Applications of matrices 3G Dominance matrices

As we saw in questions 11 and 12 from exercise 3D, matrices may be used to solve linear simultaneous equations. The pair of equations may be written in the form AX = B, where A is the matrix of the coefficients of x and y in the equations, X is the column matrix of the unknowns x and y, and B is the matrix of the numbers on the right-hand side of the simultaneous equations. A is called the coefficient matrix.
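The same approach carries over directly to numerical tools. For instance, a hypothetical pair of equations 2x + 3y = 8 and x − y = −1, written as AX = B and solved in NumPy:

```python
import numpy as np

# Solve  2x + 3y = 8,  x - y = -1  in the form AX = B
A = np.array([[2.0, 3.0],
              [1.0, -1.0]])   # coefficient matrix
B = np.array([[8.0],
              [-1.0]])        # right-hand sides
X = np.linalg.solve(A, B)     # X = A^{-1} B, valid since det(A) = -5 != 0
```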

Impact of space-time covariance estimation errors on a parahermitian matrix EVD

Abstract —This paper studies the impact of estimation errors in the sample space-time covariance matrix on its parahermitian matrix eigenvalue decomposition. We provide theoretical bounds for the perturbation of the ground-truth eigenvalues and of the subspaces of their corresponding eigenvectors. We show that for the eigenvalues, the perturbation depends on the norm of the estimation error in the space-time covariance matrix, while the perturbation of eigenvector subspaces can additionally be influenced by the distance between the eigenvalues. We confirm these theoretical results by simulations.

Maximally smooth Dirichlet interpolation from complete and incomplete sampling on the unit circle

[29] S. Weiss, I.K. Proudler, F.K. Coutts, and J. Pestana, "Iterative approximation of analytic eigenvalues of a parahermitian matrix EVD," in IEEE International Conference on Acoustics, Speech and Signal Processing, Brighton, UK, May 2019. [30] G.H. Golub and C.F. Van Loan, Matrix Computations, John

Iterative approximation of analytic eigenvalues of a parahermitian matrix EVD

A number of recent broadband array problems such as beamforming [1, 2], angle of arrival estimation [3, 4], blind source separation [5], multichannel coding [6, 7], or MIMO system design [8–11] have been successfully formulated and solved using polynomial matrix algebra. Central to this has been the McWhirter decomposition [12], which factorises a parahermitian matrix R(z), i.e. a matrix that is equal to its parahermitian conjugate R^P(z) = R^H(1/z*) = R(z). Typically a parahermitian matrix arises as a cross-spectral density (CSD) matrix, i.e. the z-transform of a space-time covariance matrix. The factorisation results in
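The defining property R^P(z) = R^H(1/z*) = R(z) translates, coefficient by coefficient, into R[−τ] = R^H[τ] for the space-time covariance lags. A quick check of this on a stored coefficient array (the (M, M, n_lags) layout with the zero lag at the centre is an assumption):

```python
import numpy as np

def is_parahermitian(R, tol=1e-12):
    """Check R(z) = R^P(z) = R^H(1/z*) on the coefficients: with R stored
    as an (M, M, 2L+1) array, zero lag at the centre index, this is
    equivalent to R[-tau] = R[tau]^H for every lag tau."""
    flipped = R[:, :, ::-1]                               # tau -> -tau
    return np.allclose(R, flipped.conj().transpose(1, 0, 2), atol=tol)

# Toy 2-channel example: Hermitian zero-lag slice, matched conjugate lags
R = np.zeros((2, 2, 3), dtype=complex)
R[:, :, 1] = [[2, 0.3 + 0.1j], [0.3 - 0.1j, 1]]          # zero lag
R[0, 1, 2] = 0.5j                                        # lag +1 ...
R[1, 0, 0] = -0.5j                                       # ... conjugate at lag -1
```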

Sample space-time covariance matrix estimation

A parahermitian matrix — typically a cross-spectral density (CSD) matrix emerging as the z-transform of a space-time covariance matrix — can be decomposed into a product of analytic paraunitary matrices and a diagonalised parahermitian matrix [1], with few exceptions [2]. A spectrally majorised, not necessarily analytic version of this factorisation is the McWhirter decomposition [3], which approximates the factorisation by polynomial paraunitary and diagonal parahermitian matrices. A number of algorithms for the latter have emerged [3–10] and in turn triggered various applications ranging from broadband multiple-input multiple-output (MIMO) systems [11, 12], to coding [13], beamforming [14, 15], source separation [16] and angle of arrival estimation [17, 18], to name but a few.
