Entropy and Information

A Form of Information Entropy

In this paper, a form of information entropy is presented axiomatically in both the crisp and the fuzzy setting. Information entropy is the unavailability of information about a crisp or fuzzy event. The construction uses a measure of information defined without any probability or fuzzy measure; for this reason it is called general information.

Entropy Region and Network Information Theory

A linear combination of the joint entropies of n random variables which is positive for all the entropy vectors in Γ*_n is referred to as a linear information inequality for the entropies. Linear information inequalities which follow from the positivity of conditional mutual information are known as Shannon-type inequalities [ZY98]. Although for up to 3 random variables the entropy region is completely characterized by a finite set of Shannon-type inequalities, the full characterization of the region for 4 or more random variables involves non-Shannon information inequalities [ZY97, MMRV02, Zha03, DFZ06a] and remains a challenging problem. In fact, it has been proven that no finite set of linear inequalities can completely characterize Γ*_n for n ≥ 4 [Mat07a]. In other words, the region is not a polytope for n ≥ 4, even though the closure of the entropy region is known to be a convex cone for all n. In summary, for n ≥ 4 only partial characterizations of the entropy region, through inner or outer bounds, exist. From the network problem perspective, inner bounds of the entropy region are interesting in that they yield achievable rates. Yet an approach for obtaining an inner bound that can easily be extended to any number of random variables is missing. This thesis takes a step in this direction by constructing an achievable entropy region through different methods.
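
The following sketch is not from the thesis; it only illustrates, for a fully specified discrete joint distribution, how an entropy vector (one joint entropy per non-empty subset of variables) can be computed and checked against the basic Shannon-type inequalities (non-negativity of conditional mutual information, i.e. submodularity of entropy):

```python
import itertools
import numpy as np

def entropy_vector(joint, n):
    """Entropy (in bits) of every non-empty subset of n discrete variables.

    joint: n-dimensional array of joint probabilities (sums to 1).
    Returns a dict mapping frozenset of variable indices to H(subset)."""
    H = {frozenset(): 0.0}
    for r in range(1, n + 1):
        for subset in itertools.combinations(range(n), r):
            axes = tuple(i for i in range(n) if i not in subset)
            marginal = joint.sum(axis=axes) if axes else joint
            p = marginal.ravel()
            p = p[p > 0]
            H[frozenset(subset)] = float(-(p * np.log2(p)).sum())
    return H

def satisfies_shannon_inequalities(H, n):
    """Check I(X_i; X_j | X_K) >= 0 for all i != j and K disjoint from {i, j},
    i.e. H(iK) + H(jK) >= H(ijK) + H(K)."""
    tol = 1e-12
    for i, j in itertools.combinations(range(n), 2):
        rest = [k for k in range(n) if k not in (i, j)]
        for r in range(len(rest) + 1):
            for K in itertools.combinations(rest, r):
                K = frozenset(K)
                if H[K | {i}] + H[K | {j}] + tol < H[K | {i, j}] + H[K]:
                    return False
    return True

# Example: three binary variables with X3 = X1 XOR X2.
joint = np.zeros((2, 2, 2))
for x1 in (0, 1):
    for x2 in (0, 1):
        joint[x1, x2, x1 ^ x2] = 0.25
H = entropy_vector(joint, 3)
print(H[frozenset({0, 1, 2})])              # 2.0 bits
print(satisfies_shannon_inequalities(H, 3))  # True: the point lies in the Shannon outer bound
```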

Research on Fuzzy C Means Algorithm Based on the Information Entropy

According to the theory of information entropy, information entropy is a measurement of the uncertainty of information [7]. If the entropy value of an attribute is smaller, the attribute's role is greater, and the weight of this attribute should naturally be bigger. This means that the entropy evaluation method can be used to determine the weight of each attribute. In the same way, for the s-th attribute of a given set of samples, if the information entropy of this attribute is bigger, the differences in this attribute between samples are not significant, and the role of this attribute in the sample clustering is smaller. On the other hand, if the information entropy of the attribute is smaller, the role of the attribute in the sample clustering is greater. According to the principle that the entropy value of an attribute and its contribution are opposite, the deviation coefficient of the s-th attribute can be described:
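
The paper's deviation-coefficient formula is not reproduced above. As a rough illustration only, the sketch below uses the standard entropy weight method, in which the deviation coefficient is taken as one minus the normalized entropy of each attribute; the exact definition in the paper may differ:

```python
import numpy as np

def entropy_weights(X):
    """Entropy weight method: attributes whose values differ more across
    samples (lower entropy) receive larger weights.

    X: (m samples, n attributes) array of non-negative attribute values.
    Returns (entropy per attribute, deviation coefficients, weights)."""
    m, n = X.shape
    # Normalize each attribute column into a probability distribution over samples.
    col_sums = X.sum(axis=0, keepdims=True)
    P = X / np.where(col_sums == 0, 1, col_sums)
    # Information entropy of each attribute, scaled to [0, 1] by ln(m).
    with np.errstate(divide="ignore", invalid="ignore"):
        logs = np.where(P > 0, np.log(P), 0.0)
    e = -(P * logs).sum(axis=0) / np.log(m)
    d = 1.0 - e                 # deviation coefficient: "opposite" of the entropy
    w = d / d.sum()             # normalized attribute weights
    return e, d, w

# Toy example: the second attribute hardly varies across samples, so it gets a small weight.
X = np.array([[1.0, 5.0, 0.2],
              [3.0, 5.1, 0.9],
              [9.0, 5.0, 0.1],
              [2.0, 4.9, 0.8]])
e, d, w = entropy_weights(X)
print(np.round(w, 3))
```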

LOGICAL ENTROPY: INTRODUCTION TO CLASSICAL AND QUANTUM LOGICAL INFORMATION THEORY

Abstract: Logical information theory is the quantitative version of the logic of partitions just as logical probability theory is the quantitative version of the dual Boolean logic of subsets. The resulting notion of information is about distinctions, differences, and distinguishability, and is formalized using the distinctions (‘dits’) of a partition (a pair of points distinguished by the partition). All the definitions of simple, joint, conditional, and mutual entropy of Shannon information theory are derived by a uniform transformation from the corresponding definitions at the logical level. The purpose of this paper is to give the direct generalization to quantum logical information theory that similarly focuses on the pairs of eigenstates distinguished by an observable, i.e., qudits of an observable. The fundamental theorem for quantum logical entropy and measurement establishes a direct quantitative connection between the increase in quantum logical entropy due to a projective measurement and the eigenstates (cohered together in the pure superposition state being measured) that are distinguished by the measurement (decohered in the post-measurement mixed state). Both the classical and quantum versions of logical entropy have simple interpretations as "two-draw" probabilities for distinctions. The conclusion is that quantum logical entropy is the simple and natural notion of information for quantum information theory focusing on the distinguishing of quantum states.
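
A minimal numerical illustration of the classical "two-draw" interpretation mentioned above (not taken from the paper): the logical entropy of a probability distribution is 1 − Σ p_i², the probability that two independent draws fall in different blocks, shown here alongside the Shannon entropy for comparison.

```python
import numpy as np

def logical_entropy(p):
    """Logical entropy h(p) = 1 - sum(p_i^2): the probability that two
    independent draws from p are distinct (a 'distinction')."""
    p = np.asarray(p, dtype=float)
    return 1.0 - np.sum(p ** 2)

def shannon_entropy(p):
    """Shannon entropy in bits, for comparison."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

p = np.array([0.5, 0.25, 0.25])
print(logical_entropy(p))   # 0.625 = chance two draws are distinguished
print(shannon_entropy(p))   # 1.5 bits

# Monte Carlo check of the "two-draw" interpretation.
rng = np.random.default_rng(0)
draws = rng.choice(len(p), size=(100_000, 2), p=p)
print((draws[:, 0] != draws[:, 1]).mean())  # approximately 0.625
```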

Information Entropy Based Leakage Certification

The accuracy of a leakage model plays a very important role in side-channel attacks and evaluations. In this paper, we aim to determine the true leakage model of a chip. To achieve this, we performed Maximum Entropy Distribution (MED) estimation on higher-order moments of measurements to approximate the true leakage model of devices, rather than assuming a leakage model. Then, non-linear programming is used to solve for the Lagrange multipliers. The MED is the most unbiased, objective and reasonable probability density estimate that can be built on known moment information, and it does not include the profiler's subjective knowledge of the model. The MED can approximate the true leakage distribution of devices well, thus reducing the model assumption error and the estimation error, and it can also approximate complex (e.g. non-Gaussian) distributions well. Both theoretical analysis and experimental results verify the feasibility of our proposed MED.
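
As a generic sketch of the idea (not the paper's implementation): the maximum entropy density consistent with a set of moment constraints has the exponential-family form p(x) ∝ exp(Σ_k λ_k x^k), and the Lagrange multipliers λ can be found by nonlinear programming on the convex dual. The grid, moment order and optimizer below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def maxent_from_moments(moments, grid):
    """Maximum entropy density on a 1-D grid matching E[x^k] = moments[k-1]
    for k = 1..K.  The density has the form p(x) ∝ exp(sum_k lam_k * x^k);
    the Lagrange multipliers lam are found by minimizing the convex dual
    (log-partition function minus lam · moments)."""
    moments = np.asarray(moments, dtype=float)
    K = len(moments)
    powers = np.vstack([grid ** k for k in range(1, K + 1)])   # (K, n) features
    dx = grid[1] - grid[0]

    def dual(lam):
        logits = lam @ powers
        m = logits.max()
        w = np.exp(logits - m)                  # unnormalized density on the grid
        logZ = m + np.log(w.sum() * dx)
        grad = powers @ (w / w.sum()) - moments  # E_p[x^k] - m_k
        return logZ - lam @ moments, grad

    res = minimize(dual, np.zeros(K), jac=True, method="BFGS")
    logits = res.x @ powers
    p = np.exp(logits - logits.max())
    return res.x, p / (p.sum() * dx)

# Matching mean 0 and second moment 1 recovers (approximately) a standard Gaussian.
grid = np.linspace(-6, 6, 2001)
lam, p = maxent_from_moments([0.0, 1.0], grid)
print(p[np.argmin(np.abs(grid))])   # close to 1/sqrt(2*pi) ≈ 0.399
```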

Triparametric self information function and entropy

[9] and some others have solved some typical functional equations and have used their solutions as entropy, inaccuracy, directed divergence, etc., but only as finite measures for complete probability distributions. The method of averaging self-informations includes the case of generalized probability distributions. Moreover, we have discussed in this paper information measures over even an infinite range, because a parameter can also take negative values corresponding to phenomenal circumstances. Further, since it is uncertain and difficult to choose an arbitrary functional equation and to find suitable solutions of it to be used as information measures, it becomes easier if we choose a suitable parametric self-information function that satisfies a number of effective boundary conditions. We have given a most simple and general choice in (1.2). Section 2 describes a triparametric entropy from which other familiar entropies have been deduced as particular cases. In Section 3 we have given a number of measures derived from this entropy: joint entropy, triparametric information functions, the generalized information function, generalized inaccuracy, a new measure called information deviation and, lastly, generalizations of Kullback's information.

Spectral Clustering with Neighborhood Attribute Reduction Based on Information Entropy

Abstract—Traditional rough set theory is only suitable for dealing with discrete variables and needs data preprocessing. Neighborhood rough sets overcome these shortcomings with the ability to process numeric data directly. This paper modifies the attribute reduction method based on neighborhood rough sets, in which attribute importance is combined with information entropy to select the appropriate attributes. When multiple attributes have the same importance degree, the information entropy of these attributes is compared and the attribute with the minimal entropy is put into the reduction set, so that the reduced attribute set is better. We then introduce this attribute reduction method to improve spectral clustering and propose the NRSR-SC algorithm. It can highlight the differences between samples while maintaining the characteristics of the data points, making the final clustering results closer to the real data classes. Experiments show that the NRSR-SC algorithm is superior to the traditional spectral clustering algorithm and the FCM algorithm: its clustering accuracy is higher, and it is strongly robust to noise in high-dimensional data.

Cryptanalysis of a chaotic image encryption algorithm based on information entropy

Plaintext sensitivity is very important for high-strength image encryption schemes, as a plain-image and its slightly modified version (embedded with a watermark or some hidden messages) are often encrypted at the same time. If the encryption scheme does not satisfy the sensitivity requirement, leakage of the cipher-image corresponding to one of the two similar plain-images may disclose the visual information of the other. In the field of image security, two metrics, UACI (unified averaged changed intensity) and NPCR (number of pixels changing rate), are widely used to measure plaintext sensitivity. Unfortunately, the validity of the two metrics has been questioned in [30] using statistical information on the outputs of some insecure encryption schemes. Here, we emphasize that the internal structure of IEAIE cannot perform well enough to achieve the expected plaintext sensitivity. Observing the encryption procedure of IEAIE, one can see that all involved operations can make every operated bit 'run' from the least significant bit (LSB) to the most significant bit (MSB), but not in the opposite order. Concretely, the change of a bit in the i-th bit-plane (counted from the LSB to the MSB) can only influence the bits in the i-th through 8-th ones. So, the influence scope of every bit of the plaintext on the corresponding cipher-text is dramatically different. No matter how many encryption rounds are repeated, this problem persists [32]. The designers of IEAIE claimed that "the keystreams are different with respect to different plain-images" based on the assumption of high sensitivity of the information entropy to changes of the plain-image. However, as we have explained above, this assumption is not correct. In all, the statement "a slight change in the plain-image leads to a completely different cipher-image" in [27, Sec. 3.2.2] is incorrect.
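
For reference, the two metrics mentioned above have standard definitions that are simple to compute; the sketch below (not taken from the paper) evaluates them for two 8-bit images, together with the values expected for an ideally sensitive cipher:

```python
import numpy as np

def npcr_uaci(c1, c2):
    """NPCR (number of pixels changing rate) and UACI (unified averaged
    changed intensity), in percent, between two 8-bit cipher-images of
    equal shape."""
    c1 = np.asarray(c1, dtype=np.int16)
    c2 = np.asarray(c2, dtype=np.int16)
    npcr = 100.0 * np.mean(c1 != c2)
    uaci = 100.0 * np.mean(np.abs(c1 - c2)) / 255.0
    return npcr, uaci

# For two independent uniformly random 8-bit images the expected values are
# NPCR ≈ 99.61 % and UACI ≈ 33.46 %; a plaintext-sensitive cipher should approach these.
rng = np.random.default_rng(1)
a = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)
b = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)
print(npcr_uaci(a, b))
```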

Entropy-information perspective to radiogenic heat distribution in continental crust

Another approach to estimating the depth distribution of radiogenic sources, considered as non-random, is through the application of extended entropy-information theoretic models (Kapur and Kesavan, 1992; Woodbury, 2012). In an interesting study, Singh (2010) used entropy maximization methods to constrain various infiltration models. Woodbury (2012) has clarified some subtle issues about the number of constraints used in Singh's formulation in deriving the infiltration equation. In this paper we will use Woodbury's formalism to constrain the depth distribution of radiogenic sources.
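
The following toy sketch is not Woodbury's formalism; it only illustrates the basic entropy-maximization mechanism in this setting: with normalization and a single prescribed mean source depth as constraints, the maximum entropy depth distribution is exponential, i.e. heat production decreasing exponentially with depth. The grid, depths and characteristic scale are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import brentq

def maxent_depth_profile(z, mean_depth):
    """Maximum entropy depth distribution of heat sources on a grid z,
    subject only to normalization and a prescribed mean source depth.
    The solution is exponential, p(z) ∝ exp(-beta * z); beta is fixed by
    the mean-depth constraint (solved numerically so the finite depth of
    the grid is handled exactly)."""
    dz = z[1] - z[0]

    def mean_residual(beta):
        w = np.exp(-beta * z)
        return np.sum(z * w) / np.sum(w) - mean_depth

    beta = brentq(mean_residual, 1e-9, 10.0 / mean_depth)
    p = np.exp(-beta * z)
    return beta, p / (p.sum() * dz)

# Illustrative 40 km crustal column with a 10 km characteristic source depth.
z = np.linspace(0.0, 40.0, 4001)       # depth in km
beta, p = maxent_depth_profile(z, mean_depth=10.0)
print(round(1.0 / beta, 2))            # decay scale D, close to 10 km
                                       # (slightly larger because the column is truncated at 40 km)
```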

An Algorithm for K LVQ Abnormal Traffic Classification Based on Information Entropy

In conclusion, the K-Means clustering method is efficient and feasible, but its accuracy is not high; the LVQ algorithm has powerful nonlinear processing ability, is not sensitive to dimensionality, and has the advantage of finding a globally good solution, but applying LVQ directly to all the data for multi-class classification has high time and space complexity. Therefore, this paper proposes an algorithm for K-LVQ traffic anomaly classification based on information entropy. Information entropy is used to quantify and unify flow data of different dimensions; on this basis, the K-Means clustering algorithm is first used to divide the training data set into several subsets that are trained independently, which reduces the total complexity of the training process and thus improves the training speed, and LVQ is then used to identify specific anomaly types. Only when the first-stage clustering errs or a new anomaly type occurs is the optimized one-to-many LVQ classification method used to discriminate the anomaly type, which makes full use of the advantages of the two methods.

Entropy and Information Theory (Robert M. Gray)

Information theory or the mathematical theory of communication has two primary goals: The first is the development of the fundamental theoretical limits on the achievable performance when communicating a given information source over a given communications channel using coding schemes from within a prescribed class. The second goal is the development of coding schemes that provide performance that is reasonably good in comparison with the optimal performance given by the theory. Information theory was born in a surprisingly rich state in the classic papers of Claude E. Shannon [129] [130] which contained the basic results for simple memoryless sources and channels and introduced more general communication systems models, including finite state sources and channels. The key tools used to prove the original results and many of those that followed were special cases of the ergodic theorem and a new variation of the ergodic theorem which considered sample averages of a measure of the entropy or self information in a process.

Computational Entropy and Information Leakage

Average-case entropy works well in situations in which not all leakage is equally informative. For instance, in case the leakage is equal to the Hamming weight of a uniformly distributed string, sometimes the entropy of the string gets reduced to nothing (if the value of the leakage is 0 or the length of the string), but most of the time it stays high. For the information-theoretic case, it is known that deterministic leakage of λ bits reduces the average entropy by at most λ [DORS08, Lemma 2.2(b)] (the reduction is less for randomized leakage). Thus, our result matches the information-theoretic case for deterministic leakage. For randomized leakage, our statement can be somewhat improved (Theorem 3.4.3).
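
As a small numerical check of the information-theoretic statement for the Hamming-weight example (not code from the paper): the average min-entropy of a uniform n-bit string given its Hamming weight can be computed exactly by brute force, and the entropy loss equals log2(n+1), the number of bits needed to describe the leakage, so the bound "loss ≤ λ" is tight here.

```python
import math
from itertools import product

def avg_min_entropy_given_hw(n):
    """Average min-entropy (in bits) of a uniform n-bit string X given its
    Hamming weight: H̃∞(X | HW(X)) = -log2( sum_w max_x Pr[X = x, HW(x) = w] )."""
    best_per_weight = {}
    for bits in product((0, 1), repeat=n):
        w = sum(bits)
        p_joint = 2.0 ** (-n)                       # Pr[X = x] (with HW(x) = w)
        best_per_weight[w] = max(best_per_weight.get(w, 0.0), p_joint)
    return -math.log2(sum(best_per_weight.values()))

for n in (4, 8, 12):
    h = avg_min_entropy_given_hw(n)
    leakage_bits = math.log2(n + 1)                 # HW takes n + 1 values
    print(f"n={n}: H(X)={n}, H(X|HW)={h:.3f}, "
          f"loss={n - h:.3f} <= lambda={leakage_bits:.3f}")
```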

Research on portfolio model based on information entropy theory

3. IMPROVED PORTFOLIO OPTIMIZATION MODEL BASED ON INFORMATION ENTROPY

In the process of securities investment, in order to effectively avoid risks, investors often use the method of portfolio investment. The information entropy optimization model introduced in this article can help investors make better optimization decisions in the case of asymmetric information. But in actual operation, investors encounter additional risks; for example, paying transaction costs affects the overall revenue of the entire transaction process. Therefore, it is necessary to analyze the transaction costs of the whole process. Transaction costs are added to the model so that the results are more realistic.
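
A rough sketch of this kind of model (not the paper's exact formulation): maximize expected return minus a risk term, plus an information-entropy diversification term, minus proportional transaction costs relative to the current holdings. All symbols and parameter values below (mu, Sigma, w_prev, cost_rate, and the weights on each term) are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def entropy_portfolio(mu, Sigma, w_prev, risk_aversion=3.0,
                      entropy_weight=0.1, cost_rate=0.002):
    """Long-only weights maximizing
        expected return - risk_aversion * variance
        + entropy_weight * H(w) - cost_rate * sum(|w - w_prev|),
    where H(w) = -sum w_i ln w_i is the information entropy of the weights
    (rewarding diversification) and the last term models proportional
    transaction costs relative to the current holdings w_prev."""
    n = len(mu)

    def neg_objective(w):
        w = np.clip(w, 1e-12, None)
        ret = mu @ w
        var = w @ Sigma @ w
        ent = -np.sum(w * np.log(w))
        cost = cost_rate * np.sum(np.abs(w - w_prev))
        return -(ret - risk_aversion * var + entropy_weight * ent - cost)

    cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)
    res = minimize(neg_objective, x0=np.full(n, 1.0 / n),
                   bounds=[(0.0, 1.0)] * n, constraints=cons, method="SLSQP")
    return res.x

# Toy data: three assets, starting from an all-in-first-asset position.
mu = np.array([0.08, 0.10, 0.06])
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.03]])
w = entropy_portfolio(mu, Sigma, w_prev=np.array([1.0, 0.0, 0.0]))
print(np.round(w, 3), round(w.sum(), 6))
```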

Entropy Search for Information-Efficient Global Optimization

Figures in previous sections provided some intuition and anecdotal evidence for the efficacy of the various approximations used by Entropy Search. In this section, we compare the resulting algorithm to two Gaussian process global optimization heuristics: Expected Improvement and Probability of Improvement (Section 1.1.3), as well as to a continuous armed bandit algorithm: GP-UCB (Srinivas et al., 2010). For reference, we also compare to a number of numerical optimization algorithms: Trust-Region-Reflective (Coleman and Li, 1996, 1994), Active-Set (Powell, 1978b,a), interior point (Byrd et al., 1999, 2000; Waltz et al., 2006), and a naïvely projected version of the BFGS algorithm (Broyden, 1965; Fletcher, 1970; Goldfarb, 1970; Shanno, 1970). We avoid implementation bias by using a uniform code framework for the three Gaussian process-based algorithms, that is, the algorithms share code for the Gaussian process inference and only differ in the way they calculate their utility. For the local numerical algorithms, we used third party code: the projected BFGS method is based on code by Carl Rasmussen, and the other methods come from version 6.0 of the optimization toolbox of MATLAB.
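
For context, the two Gaussian process heuristics used as baselines have simple closed forms; the sketch below shows the textbook versions (for minimization, given a GP posterior mean and standard deviation at candidate points), not the shared code framework used in the paper:

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, eta):
    """Expected Improvement for minimization, given the GP posterior mean
    `mu` and standard deviation `sigma` at candidate points and the best
    function value observed so far, `eta`."""
    sigma = np.maximum(sigma, 1e-12)
    z = (eta - mu) / sigma
    return sigma * (z * norm.cdf(z) + norm.pdf(z))

def probability_of_improvement(mu, sigma, eta):
    """Probability that a candidate improves on the incumbent `eta`."""
    sigma = np.maximum(sigma, 1e-12)
    return norm.cdf((eta - mu) / sigma)

# The two utilities trade off posterior mean against uncertainty differently.
mu = np.array([0.0, -0.1, 0.3])
sigma = np.array([0.5, 0.05, 1.0])
eta = 0.0
print(expected_improvement(mu, sigma, eta))
print(probability_of_improvement(mu, sigma, eta))
```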

Information Entropy Production of Maximum Entropy Markov Chains from Spike Trains

Consider a finite and fixed number of neurons N and a potential of range 1. This case includes the Ising model [5], triplets [9], K-pairwise [7] and all other memoryless potentials that have been used in the context of maximum entropy models of spike train statistics. It represents a limiting case in the definition of the transfer matrix, where transitions between spike patterns σ → σ', with σ, σ' ∈ Σ^N_1, are considered and all transitions are allowed. In this case, the potential does not "see" the past, i.e. L_H(σ, σ') = e^{H(σ')}.
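
A small sketch (under assumed notation, not the paper's code) of what this limiting case looks like for an Ising-type memoryless potential: every row of the transfer matrix is the same vector e^{H(σ')}, so the invariant measure of the associated Markov chain is simply the Gibbs distribution over single spike patterns.

```python
import itertools
import numpy as np

def memoryless_transfer_matrix(h, J):
    """Transfer matrix for a range-1 (memoryless) maximum entropy potential
    over N binary neurons, e.g. an Ising-type potential
    H(σ) = h·σ + Σ_{i<j} J_ij σ_i σ_j (J upper triangular).
    Since the potential does not see the past, L[σ, σ'] = exp(H(σ'))."""
    N = len(h)
    patterns = np.array(list(itertools.product((0, 1), repeat=N)), dtype=float)
    H = patterns @ h + np.einsum("ki,ij,kj->k", patterns, J, patterns)
    col = np.exp(H)                        # depends only on the target pattern σ'
    L = np.tile(col, (len(patterns), 1))   # every row of L is identical
    return L, patterns

# Small example: 3 neurons with random fields and couplings.
rng = np.random.default_rng(0)
h = rng.normal(0.0, 0.5, size=3)
J = np.triu(rng.normal(0.0, 0.3, size=(3, 3)), k=1)
L, pats = memoryless_transfer_matrix(h, J)
# Invariant measure = Gibbs distribution p(σ') ∝ exp(H(σ')), independent of the past.
p = L[0] / L[0].sum()
print(np.round(p, 3))
```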

Bayesian System Identification of MDOF Nonlinear Systems using Highly Informative Training Data

The aim of this paper is to utilise the concept of 'highly informative training data' such that, using Markov chain Monte Carlo (MCMC) methods, one can apply Bayesian system identification to multi-degree-of-freedom nonlinear systems with relatively little computational cost. Specifically, the Shannon entropy is used as a measure of information content such that, by analysing the information content of the posterior parameter distribution, one is able to select and utilise a relatively small but highly informative set of training data (thus reducing the cost of running MCMC).
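
The paper's own entropy estimator is not shown here; as one common and minimal way to quantify the information content of a posterior from MCMC samples, the sketch below uses the differential entropy of a Gaussian approximation fitted to the samples (a more concentrated, i.e. more informative, posterior has lower entropy):

```python
import numpy as np

def gaussian_posterior_entropy(samples):
    """Differential entropy (in nats) of a Gaussian approximation to the
    posterior: H ≈ 0.5 * ln( (2*pi*e)^d * det(Cov) ), estimated from MCMC
    samples of shape (n_samples, d).  Lower entropy indicates a more
    concentrated (more informative) posterior."""
    samples = np.atleast_2d(samples)
    d = samples.shape[1]
    cov = np.cov(samples, rowvar=False).reshape(d, d)
    sign, logdet = np.linalg.slogdet(cov)
    return 0.5 * (d * np.log(2.0 * np.pi * np.e) + logdet)

# Example: a tighter posterior (smaller variances) has lower entropy.
rng = np.random.default_rng(0)
broad = rng.normal(0.0, 1.0, size=(5000, 2))
tight = rng.normal(0.0, 0.1, size=(5000, 2))
print(gaussian_posterior_entropy(broad))   # ≈ 2.84 nats
print(gaussian_posterior_entropy(tight))   # ≈ -1.77 nats
```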

Some Features on Entropy Squeezing for Two Level System with a New Nonlinear Coherent State

Entropy squeezing is an important feature in performing different tasks in quantum information processing such as quantum cryptography and superdense coding. These quantum information tasks depend on finding the states in which squeezing can be created. In this article, a new feature of entropy squeezing for a two level system with a class of nonlinear coherent states (NCS) is observed. An interesting result on the comparison between the coherent state (CS) and the NCS is explored. The influence of the Lamb-Dicke parameter in both the absence and the presence of the Kerr medium is examined. A rich feature of entropy squeezing has been obtained in the case of the NCS, which is observed to describe the motion of the trapped ion.

Towards a Framework for Observational Causality from Time Series: When Shannon Meets Turing

Because x⁻ and y⁻ are the only parents of the output y, it follows from the Causal Markov Condition that the associated channel is memoryless. This mutual information quantifies the amount of information that is transmitted over the g-th sub-channel. The transfer entropy from Equation (3) can now be expressed as
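
Equation (3) itself is not reproduced above. As a generic illustration only, the sketch below computes a plug-in estimate of the transfer entropy T_{X→Y} = I(Y_t ; X_{t-1} | Y_{t-1}) for discrete time series, with one-step histories standing in for the full pasts x⁻ and y⁻:

```python
import math
import random
from collections import Counter

def transfer_entropy(x, y):
    """Plug-in transfer entropy T_{X->Y} (bits) for discrete series of equal
    length, using one-step histories: I(Y_t ; X_{t-1} | Y_{t-1})."""
    triples = Counter()
    for t in range(1, len(y)):
        triples[(y[t], y[t - 1], x[t - 1])] += 1
    n = sum(triples.values())
    p3 = {k: v / n for k, v in triples.items()}
    # Marginals needed for the two conditional probabilities.
    p_yy, p_yx, p_y = Counter(), Counter(), Counter()
    for (yt, yp, xp), p in p3.items():
        p_yy[(yt, yp)] += p
        p_yx[(yp, xp)] += p
        p_y[yp] += p
    te = 0.0
    for (yt, yp, xp), p in p3.items():
        cond_full = p / p_yx[(yp, xp)]          # p(y_t | y_{t-1}, x_{t-1})
        cond_self = p_yy[(yt, yp)] / p_y[yp]    # p(y_t | y_{t-1})
        te += p * math.log2(cond_full / cond_self)
    return te

# y copies x with a one-step delay, so information flows from x to y only.
random.seed(0)
x = [random.randint(0, 1) for _ in range(10000)]
y = [0] + x[:-1]
print(transfer_entropy(x, y))   # ≈ 1 bit
print(transfer_entropy(y, x))   # ≈ 0 bits
```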

COMPARATIVE PERFORMANCE EVALUATION OF XYZ PLANE BASED SEGMENTATION AND ENTROPY BASED SEGMENTATION FOR PEST DETECTION

corresponding segment by maximizing the total information and minimizing the difference between the information contained in the foreground and background surfaces [7]. The primary image is taken from the camera and is generally in RGB format. Other color spaces like YUV, YIQ, HSI, I1I2I3,
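
The entropy-based segmentation procedure is only partially preserved above. A standard instance of the idea of maximizing the information of foreground and background is Kapur's maximum-entropy thresholding, sketched below on a synthetic image (illustrative, not necessarily the exact method of the paper):

```python
import numpy as np

def kapur_threshold(image):
    """Kapur's maximum-entropy threshold for an 8-bit grayscale image:
    choose t maximizing the sum of the Shannon entropies of the background
    (gray levels <= t) and foreground (gray levels > t) distributions."""
    hist = np.bincount(np.asarray(image, dtype=np.uint8).ravel(), minlength=256)
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(255):
        pb, pf = p[: t + 1], p[t + 1:]
        wb, wf = pb.sum(), pf.sum()
        if wb == 0 or wf == 0:
            continue
        qb, qf = pb[pb > 0] / wb, pf[pf > 0] / wf
        h = -(qb * np.log(qb)).sum() - (qf * np.log(qf)).sum()
        if h > best_h:
            best_t, best_h = t, h
    return best_t

# Synthetic image: dark background with a brighter blob standing in for a pest region.
rng = np.random.default_rng(2)
img = rng.normal(60, 10, size=(128, 128))
img[40:80, 40:80] += 120
img = np.clip(img, 0, 255).astype(np.uint8)
t = kapur_threshold(img)
mask = img > t
print(t, mask.mean())   # threshold lands between the two modes; ≈ 9.8 % foreground
```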

Article Description

Abstract: Pattern recognition is a rapidly growing field in computer science. This research describes a technique for the assessment of the information contents of patterns and discusses the use of information theory measures in pattern recognition problems. Iterative techniques, such as the fixed point iterative method or the binary search method, are also well suited to finding patterns. Information theory is closely associated with a collection of pure and applied disciplines that have been investigated and reduced to engineering practice under a variety of topics throughout the world over the past half century or more: adaptive systems, anticipatory systems, artificial intelligence, complex systems, complexity science, cybernetics, informatics, machine learning, along with systems sciences of many descriptions. Information theory is a broad and deep mathematical and computational theory, with equally broad and deep applications, amongst which is the vital field of coding theory. This research applies a technique for assessing information to the recognition capability, in order to increase the efficiency of pattern recognition based on information theory. Entropy and conditional entropy have been used for learning models and designing inference algorithms, and many techniques and algorithms have been developed over the last decades; most of them involve extracting some of the information that describes a pattern. The results show that the computed orientation information contents agree with the theory, going to zero in the limit for an orientation pattern, and that the information contents depend strongly on size and information. Using the fixed point iteration technique is a new method for finding the area of a pattern based on shifting and matching. Application of the entropy, the fundamentals of information theory, the assessment of the translational and rotational information contents of patterns, and the assessment of the total information contents used in this technique show that the technique is suitable for recognition problems and that information theory measures are an important tool for pattern recognition using iterative fixed point methods.
