Top PDF information complexity:

Misspecified Multivariate Regression Models Using the Genetic Algorithm and Information Complexity as the Fitness Function

Abstract. Model misspecification is a major challenge faced by all statistical modeling techniques. Real-world multivariate data in high dimensions frequently exhibit higher kurtosis and heavier tails, asymmetry, or both. In this paper, we extend Akaike's AIC-type model selection criteria in two ways. We use a more encompassing notion of information complexity (ICOMP) of Bozdogan for multivariate regression to allow certain types of model misspecification to be detected using the newly proposed criterion so as to protect the researchers against model misspecification. We do this by employing the "sandwich" or "robust" covariance matrix F̂⁻¹R̂F̂⁻¹, which is computed with the sample kurtosis and skewness. Thus, even if the data modeled do not meet the standard Gaussian assumptions, an appropriate model can still be found. Theoretical results are then applied to multivariate regression models in subset selection of the best predictors in the presence of model misspecification by using the novel genetic algorithm (GA), with our extended ICOMP as the fitness function.
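A minimal sketch of the kind of "sandwich" covariance the abstract refers to, here for a single-response linear regression in the HC0 form with bread = (X'X)^-1 and meat = X' diag(e_i^2) X; this is my own Python illustration, not the paper's ICOMP computation, which additionally uses sample skewness and kurtosis:

import numpy as np

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=(n, 3))])   # design matrix with intercept
beta_true = np.array([1.0, 0.5, -0.3, 2.0])
y = X @ beta_true + rng.standard_t(df=4, size=n)             # heavy-tailed, non-Gaussian errors

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)                 # OLS estimate
resid = y - X @ beta_hat

bread = np.linalg.inv(X.T @ X)                               # plays the role of F-hat^{-1}
meat = X.T @ (resid[:, None] ** 2 * X)                       # plays the role of R-hat
sandwich_cov = bread @ meat @ bread                          # robust covariance of beta_hat

print(np.sqrt(np.diag(sandwich_cov)))                        # robust standard errors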

Copula Model Selection of Stock Return Time Series Using Information Complexity

In this paper we estimate the correlation between four different stock return prices. To accomplish this, we use copula models to study the dependency structure between the variables. The original variables of interest are mapped into more manageable variables by considering the joint and marginal distributions of these variables. Then a correlational structure between these variables is obtained. We fit several well-known copula models to the portfolio of the stock return price dataset using the consistent information complexity (CICOMP) criterion along with other AIC-type criteria to choose the best copula functional model. CICOMP outperformed the AIC-type criteria both when the fitted models are correctly specified and when they are misspecified. We expect to get more realistic results using other copula distributions, in contrast to the Gaussian copula used by Li (2000), which fails to capture the dependence between extreme events.
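As a rough illustration of copula model selection with an information criterion, the sketch below (my own, on simulated data and using AIC rather than the paper's CICOMP) fits a Gaussian copula to pseudo-observations and scores it:

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, d = 500, 4
cov = 0.6 * np.ones((d, d)) + 0.4 * np.eye(d)                  # equicorrelated toy "returns"
returns = rng.multivariate_normal(np.zeros(d), cov, size=n)

u = stats.rankdata(returns, axis=0) / (n + 1)                  # pseudo-observations in (0, 1)
z = stats.norm.ppf(u)                                          # normal scores
R = np.corrcoef(z, rowvar=False)                               # estimated copula correlation

R_inv = np.linalg.inv(R)
_, logdet = np.linalg.slogdet(R)
# Gaussian copula log-density summed over rows: -0.5 * z_i'(R^{-1} - I) z_i - 0.5 * log|R|
loglik = -0.5 * np.einsum("ij,jk,ik->", z, R_inv - np.eye(d), z) - 0.5 * n * logdet

k = d * (d - 1) // 2                                           # free correlation parameters
aic = -2 * loglik + 2 * k
print(f"Gaussian copula: log-likelihood = {loglik:.1f}, AIC = {aic:.1f}")

The same pseudo-observations can be scored under other copula families (t, Clayton, Gumbel) and the criterion values compared; ICOMP-type criteria add a complexity penalty on the estimated covariance structure beyond the simple parameter count used by AIC.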

Trading Information Complexity for Error

Communication complexity studies the amount of communication needed to compute a function whose inputs are spread among several parties. It has many applications to different areas of complexity theory and beyond, mostly as a technical tool used for proving lower bounds. Traditionally, communication complexity has been studied through a combinatorial lens. Recently, information complexity, the approach to study communication complexity via information theory, has successfully been used to resolve some of the major problems of this field [11, 2, 3]. While communication complexity is concerned with minimizing the amount of communication required for two players to evaluate a function, information complexity is concerned with the amount of information that the communicated bits reveal about the players’ inputs.
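To make the distinction concrete, here is a tiny Python computation (my own toy example, not from the paper) of the external information cost of a deterministic protocol for AND on uniform inputs: Alice sends her bit, Bob replies with the answer, and we measure how many bits the transcript reveals about the input pair.

from collections import Counter
from math import log2

def entropy(counts):
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())

inputs = [(x, y) for x in (0, 1) for y in (0, 1)]            # uniform over {0,1}^2
transcript = {(x, y): (x, x & y) for (x, y) in inputs}       # Alice sends x, Bob sends AND(x, y)

h_inputs = entropy(Counter(inputs))                          # H(XY) = 2 bits
h_inputs_given_t = 0.0
for t in set(transcript.values()):
    block = [xy for xy in inputs if transcript[xy] == t]
    h_inputs_given_t += (len(block) / len(inputs)) * entropy(Counter(block))

print("I(XY; transcript) =", h_inputs - h_inputs_given_t, "bits")   # 1.5 bits, though 2 bits were sent

The protocol communicates 2 bits but reveals only 1.5 bits about the inputs, which is the sense in which information complexity can sit strictly below communication complexity.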

Designed to Fail: Effects of the Default Option and Information Complexity on Student Loan Repayment

First, we ask whether it is simply the power of the default option. All borrowers are automatically placed into the Standard plan and remain in that plan unless they take active steps to change. A wealth of research on other topics suggests that under these circumstances the effect of the default option should be strong. Second, we ask whether and to what degree borrowers’ choices might also be driven by inaccurate information about their future earnings. Several studies suggest that students are overly optimistic about their employment and earnings prospects, potentially leading them away from income driven repayment plans that act as insurance against low earnings or unemployment. Third, we ask what role information complexity plays in suboptimal decision-making. At present borrowers are faced with no fewer than seven repayment options, each with its own complicated set of rules. Moreover, the Department of Education and allied non-profit student loan servicers present borrowers with a potentially overwhelming amount of information on repayment plans. We ask whether this complexity paralyzes or confuses borrowers, potentially reinforcing the power of the default option.

Performance of Information Complexity Criteria in Structural Equation Models with Applications

Since the information criteria used in the study were not applied to just-identified models, over-identified models were chosen. In accordance with the t-rule [19], one of the rules of identification, a model must have fewer than 28 free parameters to be over-identified. Accordingly, three different models with 17, 15, and 13 free parameters were compared to assess the information criteria. The path diagrams of these three models, obtained from LISREL, are given in Figures 2, 3, and 4. Model 1 was fit to the data with 17 free parameters in Figure 2, with p-value = 0.29 > 0.05. In this model, the diagonal elements of the covariance matrix of the latent exogenous variables and a single path coefficient between the latent and observed endogenous variables were fixed. The other parameters were estimated freely. Model 2 was fit to the data with 15 free parameters, shown in Figure 3, with p-value = 0.42 > 0.05. It was restricted by setting to 1 a single path coefficient for each latent variable and equating the diagonal elements of the covariance matrix of the latent variables. In the case of Model 3, the model was fit to the data with 13 free parameters, shown in Figure 4, with p-value = 0.00009 < 0.05. It was also restricted by setting to 1 a single path coefficient for each latent variable and equating the measurement errors of the observed variables belonging to each latent variable.
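A quick check of the t-rule figures quoted above (my own sketch; the 28 distinct moments correspond to an assumed 7 observed variables, since 7 × 8 / 2 = 28):

def sem_degrees_of_freedom(p_observed: int, t_free: int) -> int:
    # Distinct elements of the observed covariance matrix: p(p + 1) / 2.
    moments = p_observed * (p_observed + 1) // 2
    return moments - t_free        # positive => over-identified

for t in (17, 15, 13):             # the three models compared in the study
    df = sem_degrees_of_freedom(7, t)
    print(f"free parameters = {t}: df = {df}, over-identified = {df > 0}")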

Data Mining with Multivariate Kernel Regression Using Information Complexity and the Genetic Algorithm

and K(·) is the kernel function, which is positive, integrates to 1, and satisfies some regularity conditions [Parzen, 1962]. The parameter h > 0 is the window width that controls the tradeoff between smoothness and fidelity of the estimate [Bozdogan, 2000]. The bandwidth h > 0 is the width of the bins into which the sample data are categorized for density estimation. Choosing an h that is too small will result in a density that is too abrupt and fluctuating. However, choosing an h that is too large masks important features of the data by over-smoothing the density. Several techniques in the literature are presented in Section 3.3 for calculating the optimum bandwidth h without using information criteria or complexity.
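A minimal Python sketch of the bandwidth trade-off described above (my own illustration with a rule-of-thumb bandwidth; the ICOMP-based and other selectors discussed in the text are not reproduced here):

import numpy as np

def gaussian_kde(x_grid, sample, h):
    # f_hat(x) = (1 / (n h)) * sum_i K((x - x_i) / h), with K the standard normal density
    u = (x_grid[:, None] - sample[None, :]) / h
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (len(sample) * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(2)
sample = np.concatenate([rng.normal(-2, 0.5, 200), rng.normal(2, 1.0, 200)])  # bimodal data
h_rule = 1.06 * sample.std(ddof=1) * len(sample) ** (-1 / 5)                  # Silverman-type bandwidth

grid = np.linspace(-6, 6, 600)
dx = grid[1] - grid[0]
for h in (0.05, h_rule, 3.0):          # too small (jagged), rule of thumb, too large (oversmoothed)
    f = gaussian_kde(grid, sample, h)
    print(f"h = {h:.3f}: density mass on grid = {(f * dx).sum():.3f}, peak height = {f.max():.3f}")

A very small h produces a sharp, noisy estimate (high peaks), while a very large h flattens the two modes into one, which is the over-smoothing the text warns about.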

Detecting malware with information complexity

Received: 20 April 2020; Accepted: 16 May 2020; Published: 20 May 2020

Abstract: Malware concealment is the predominant strategy for malware propagation. Black hats create variants of malware based on polymorphism and metamorphism. Malware variants, by definition, share some information. Although the concealment strategy alters this information, there are still patterns in the software. Given a zoo of labelled malware and benign-ware, we ask whether a suspect program is more similar to our malware or to our benign-ware. Normalized Compression Distance (NCD) is a generic metric that measures the shared information content of two strings. This measure opens a new front in the malware arms race, one where the countermeasures promise to be more costly for malware writers, who must now obfuscate patterns as strings qua strings, without reference to execution, in their variants. Our approach classifies disk-resident malware with 97.4% accuracy and a false positive rate of 3%. We demonstrate that its accuracy can be improved by combining NCD with the compressibility rates of executables using decision forests, paving the way for future improvements. We demonstrate that malware reported within a narrow time frame of a few days is more homogeneous than malware reported over two years, but that our method still classifies the latter with 95.2% accuracy and a 5% false positive rate. Due to its use of compression, the time and computation cost of our method is nontrivial. We show that simple approximation techniques can improve its running time by up to 63%. We compare our results to the results of applying the 59 anti-malware programs used on the VirusTotal website to our malware. Our approach outperforms each one used alone and matches that of all of them used collectively.
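A minimal sketch of the NCD computation the abstract relies on (my own illustration; the compressor is Python's built-in lzma, the same LZMA algorithm used by 7-zip, and the byte strings are placeholders rather than real executables):

import lzma

def clen(data: bytes) -> int:
    return len(lzma.compress(data, preset=9))

def ncd(x: bytes, y: bytes) -> float:
    cx, cy, cxy = clen(x), clen(y), clen(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

a = b"push ebp; mov ebp, esp; call init; " * 200                     # stand-in for one binary
b = b"push ebp; mov ebp, esp; call init; " * 180 + b"nop; " * 80     # a "variant" of a
c = bytes(range(256)) * 30                                            # unrelated content

print("NCD(variant, original) =", round(ncd(a, b), 3))      # smaller: much shared information
print("NCD(unrelated, original) =", round(ncd(a, c), 3))    # larger: little shared information

Classification then amounts to asking whether a suspect file's NCD is smaller to the labelled malware zoo or to the benign-ware zoo.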

3D Scene and Object Classification Based on Information Complexity of Depth Data

The normalized compression distance (NCD) is a universal metric for comparison of general object descriptions such as music and genome data [20]. Unfortunately, the application of NCD to image similarity measurement is not satisfactory [21], where it has been shown experimentally that NCD cannot be universally applied to images. In order to remedy this problem, we propose to use the EMD as the Kolmogorov complexity of each sparse model. Estimating the Kolmogorov complexity of a model using a compression algorithm has some drawbacks, such as sensitivity to repeated patterns and the mapping of many representations to a fixed value in the complexity space. In other words, since a compression-based complexity estimator is not a one-to-one mapping function, some problems arise in practice. The EMD has been widely used for distance measurement of histograms and feature descriptors. Since the EMD is not a normalized metric, we present a new definition of the Kolmogorov complexity based on the EMD in order to employ the NCD as a normalized metric.
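As a small illustration of the EMD building block mentioned above (my own sketch using scipy's 1-D Wasserstein distance on made-up depth samples, not the paper's sparse-model pipeline):

import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(3)
depth_a = rng.normal(2.0, 0.3, 1000)    # stand-in depth values for one scene/object
depth_b = rng.normal(2.5, 0.5, 1000)    # stand-in depth values for another
depth_c = rng.normal(2.0, 0.3, 1000)    # drawn from the same distribution as depth_a

print(f"EMD(a, b) = {wasserstein_distance(depth_a, depth_b):.3f}")   # larger: different distributions
print(f"EMD(a, c) = {wasserstein_distance(depth_a, depth_c):.3f}")   # smaller: similar distributions

Because the EMD is not bounded, the authors' point is that it has to be renormalized before it can stand in for the complexity terms inside the NCD formula.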

Detecting Malware with Information Complexity

NCD is an upper bound on information distance. The choice of compressor determines how tight this upper bound will be. Previous research [8] found that the size of strings we want to compare and the size of the block or window that the compressor uses affect the values of NCD. Our own experiments confirm this finding and indicate that a compressor similar to 7-zip performs well (i.e. using 7-zip, NCD(x,x) is close to zero) for our domain, classifying malware and benign-ware. Figure 1 shows the results of our experiment. The x-axis is the size of the file while the y-axis is the NCD of the file to itself. We can clearly see that 7-zip outperforms the other three compressors (gzip, winzip and bzip2). The window size in 7-zip can be set to a maximum of 4GB, making it suitable to calculate NCD for two files with a combined size of up to 4GB.
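The self-distance check described above can be reproduced with Python's standard compressors (my own sketch; 7-zip itself is an external tool, so lzma, the same underlying algorithm, stands in for it here):

import bz2, lzma, os, zlib

def ncd_self(x: bytes, compress) -> float:
    cx = len(compress(x))
    cxx = len(compress(x + x))
    return (cxx - cx) / cx                 # NCD(x, x); an ideal compressor gives ~0

payload = os.urandom(200_000)              # incompressible 200 KB test string
for name, fn in [("zlib", zlib.compress), ("bz2", bz2.compress), ("lzma", lzma.compress)]:
    print(f"{name:5s} NCD(x, x) = {ncd_self(payload, fn):.3f}")

zlib's 32 KB window cannot "see" the first copy while compressing the second, so its NCD(x, x) stays near 1, whereas lzma's much larger dictionary drives it toward 0, mirroring the window-size effect the authors report.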

Privacy fatigue: The effect of privacy control complexity on consumer electronic information disclosure

As noted, one of the difficulties of researching information privacy controls, particularly with OSNs, is the task of collecting actual consumer behavior along with experimental manipulation for establishing causality. To address this research gap, we consequently created a mobile app-based geocaching game that also included a website application and an online social network that players were incented to create and use. Participants included 568 undergraduates at a private university in the US. Geocaching is the sport of finding "treasures" (e.g. small objects, lists of finders, etc.) stored or "cached" in various geographic locations. Players use a GPS to find the caches. The mobile app (called "findamine" or "find.a.mine") was available in both the Apple App Store™ and Google Play™. Findamine was created as a modified geocaching game in which players took pictures of themselves at the locations rather than finding small objects. In addition, rather than being given a latitude/longitude coordinate, players were given short, text-based clues that led to the locations (e.g., "This building was built in 1973.") or used a "hot-cold" meter that told them how close or far away they were from a site based on the real-time GPS coordinates provided by the player's device.

A Complexity Reduction Technique for Multi Hop Relay Network With Out Signal Information

n_j ∼ CN(0, N_0), j = 1, 2, are the noise terms at the destination. Thus, the effect of synchronization error is collected into the quantities g_20 and g_21. Depending on the number of side-lobes of the pulse-shape filter, more terms may appear in (2). In our model, the small contributions of the side-lobes of p(t) are neglected. If this information is not available (the system under consideration), one can treat the ISI as noise. In this case, assuming that the channel coefficients are constant during two consecutive blocks, the data symbols can be conventionally decoded as

Complexity over Uncertainty in Generalized Representational Information Theory (GRIT): A Structure-Sensitive General Theory of Information

Naïve informationalism offers a tenable explanation as to why scientists from a wide range of disciplines, from Physics to Psychology and from Biology to Computer Science, use the term "information" to refer to specific types of knowledge that characterize their particular domain of research. For example, a data analyst may be interested in the way that data may be stored in a computing device, but has no interest in the molecular interactions of a physical system. Such molecular activity is not relevant to the problems and questions of interest in the field. One could say that, to the data analyst, the entities of interest are data. Accordingly, in the field of data analysis, the terms "information" and "data" are often used interchangeably to refer to the kinds of things that a computing device is capable of storing and operating on. Similarly, for some types of physicists, the quantum states of a physical system during a certain time window comprise information. On the other hand, a behavioral psychologist may be interested in the behaviors of rats in a maze. Indeed, to a behavioral psychologist the objects of information are these behaviors. In contrast, a geneticist may find such behaviors quite tangential to his discipline. Instead, to the geneticist, knowledge about the genome of the rat is considered far more fundamental, and symbol sequences (e.g., nucleotides) may be a more useful way of generalizing and thinking about the basic objects of information. All of these examples support the idea that there are as many types of information as there are domains of human knowledge [1].

Restricted Complexity, General Complexity

The International Ethical, Political and Scientific Collegium has formulated a declaration of interdependence which it would wish to see promulgated by the United Nations. We must think interdependence in all fields, including the complex relation between the parts and the whole. We need to be able to face the uncertainties of life, whereas nothing prepares us for them. We need to face complexity, including for action, whereas the cautionary principle is opposed to the risk principle, while Pericles had truly expressed the union of the two antagonistic principles when he said in a speech to the Athenians during the Peloponnesian war: "we Athenians, we are capable of combining prudence and audacity, whereas the others are either timorous or bold". It is this combination that we need. Also, precaution today sometimes requires much invention.

Completeness and Complexity of Information Available to Parents From Newborn-Screening Programs

Only 13 of the programs informed parents about what steps should be taken in the event of a positive screening result. Typically, this was limited to a description of the need for retesting. Because programs make a concerted effort to quickly identify and track children with positive results and because the child's primary care physician is often responsible for notifying parents of the result, it could be argued that informing parents about the necessary steps after a positive test in the educational material provided at the time of screening is unnecessary. However, providing this information might decrease parental anxiety by emphasizing that an abnormal screen does not confirm disease and might enhance timely follow-up by underscoring the need for quick follow-up testing.

Policy Implications of Economic Complexity and Complexity Economics

(experimenting, adapting) and endowed with some intentionality, to improve their benefits and solve related intricate problems. Such intentionality under relatively limited cognitive capacity would primarily include reducing the perceived complexity of their decision situations. This is where the emergence of social rules and institutions comes in. Deregulated "markets", in fact, tend to reinforce (individually perceived) over-complexity and its systemic costs (e.g., Helbing 2013; Jones 2014; Battiston et al. 2015), with overly high turbulence (volatility), while transparency and stability may quickly become too low. Individual "knowability" or calculability of the systems' dynamics and of a good individual choice therein then becomes highly restricted, given human brains' capacities. Thus, for policy, we will have to deal with proper complexity reduction of individual decision situations, without having any hope, though, of being able, or even wishing, to reduce the system complexity so far that global "perfect information" and "certainty" would result.

Differential Sensibility of Information Processing Capacity with Age: Effects of Physical Activity and Task Complexity

Regarding the age × complexity × inactivity interaction, our results showed no interaction with the decision-making task. Unlike the study of Abourezk and Tool [41], it seems that the complexity of the task may be an important factor in determining the relationship between physical activity and cognitive performance in the elderly. Moreover, our results were not able to reveal this larger effect of physical activity on complex tasks requiring costly processing.


Complexity and nationalism

The answer is that markets are able to harness the distributed knowledge of individuals. Every person is both consumer and producer and has a different perspective on what they can supply and what their and others' demands are. The price mechanism acts to aggregate their variegated specialist knowledge, pooling this distributed knowledge to regulate what is produced, how much, where and for whom. Few would contest that such a system works better to produce private goods than one based on central control, which cannot know the tastes and capacities of millions of individuals in diverse niches nor coordinate production and distribution to effectively meet constantly shifting patterns of demand. In short, a 'spontaneous order' emerging from complexity has greater information processing power than a centrally directed order where knowledge is focused among expert planners. We see this in other spheres such as guessing the number of beans in a jar, where the aggregate of everyone's guesses consistently approaches or beats the best guess, or when betting markets routinely offer superior predictions to experts (Surowiecki 2004; Hayek 1994: 6).

Complexity and Entrepreneurship

Because of the market's complexity, as the result of the division of labor as the source of wealth, planning is impossible. Nobody possesses all the present and future knowledge. Besides, what would be left of real human choices? Complexity theory explains how a complex order spontaneously emerges from the interaction of simple elements. A lack of knowledge about specific results is to be treated as an inherent feature of the market and not as a temporary problem of a lack of knowledge. However, some ideas look so respectable that they can fail nine times out of ten and still be used the tenth time. Other ideas look so improbable that even though they succeed nine times in a row, they are still not trusted the tenth time (Sowell, 2004). Government policy to steer the economy obviously belongs to the first class of ideas and the impersonal market process to the second. In short, as far as government policy goes, we must distinguish between what sounds good and what works.

Deconstructing Complexity: Configurations of Institutional Complexity and Structural Hybridity

multiple spheres governed by different dominant discourses, competing logics could be “played out in full, yet not together” (Zilber, 2011: 1553). Such field-level jurisdictional divisions, as Lounsbury’s (2007) study of the US mutual fund industry demonstrates, can be reinforced by geographical separation. The study showed that ‘trustee’ and ‘performance’ logics co-existed for decades, each separately guiding the behavior of professional money management firms respectively located in Boston and New York. In Boston, the trustee logic was dominant and the main goal of mutual fund firms was wealth preservation – achieved through conservative long-term investing strategies and hiring practices based on pedigree and propriety. In New York, by contrast, the performance logic was dominant and the emphasis was on short-term annualized returns – achieved through aggressive investing techniques and by hiring based on merit. Despite incompatible prescriptions regarding the mission, goals, and practices of mutual fund firms, the logics peacefully co-existed within the field for years because their proponents claimed authority over separate geographic domains. Geographic segregation, in other words, eased the experience of institutional complexity.

Complexity, BioComplexity, the Connectionist Conjecture and Ontology of Complexity

Information evolves. It evolves by making new interconnections with existing information. This is accommodated within the complete connectionist representation of totality, where an entity or a new node gains existence, i.e. meaning, when it establishes connections with at least one of the existing nodes or entities. If this new node is a quantum of information (or knowledge), then it gets defined only when it establishes connections with existing nodes. The connectionist nature of totality triggers, through the interconnections of totality, a gradient in information, which triggers the evolution of information and its subsequent creation, or space for its creation. This leads to infinitely recursive information, where information becomes a completely open system, perpetually evolving. Thus the entire connectionist representation of totality, a complex network, evolves. Information is also transformed by evolution: as nodes evolve, a quantum of information establishes new interconnections with other nodes, thereby acquiring new meaning and thus being transformed. Thus evolution both creates and transforms information. Evolution transforms information either partially or completely (some of its interconnections with other nodes change, or all of them do). This can also happen, whether creation or transformation or both, to entire subspaces. Identical or near-identical information (in terms of connections, i.e. similar or identical connections) can arise by completely different processes of evolution of information, e.g. identical information evolving in different subspaces of the connectionist totality due to highly non-linear processes of evolution, leading, very interestingly, to dimensionality reduction. So the evolution of information also triggers dimensionality reduction. Apparently, the probability of this event seems very low, considering the enormous dimensionality of the metric space of the connectionist totality or even a large subspace of totality. Processes of evolution
