Those two maps have also been fully explored with the hope of generating pseudo-random numbers [6]. However, the collapsing of iterates of dynamical systems, or at least the existence of very short periodic orbits, their non-constant invariant measure, and the easily recognized shape of the function in phase space should all discourage the use of such one-dimensional maps (logistic, baker, tent, etc.) or two-dimensional maps (Hénon, standard, Belykh, etc.) as pseudo-random number generators (see [7] for a survey). Nevertheless, the very simple computer implementation of chaotic dynamical systems has led some authors to use them as the basis of cryptosystems [8]. The two maps are topologically conjugate, which means they share similar topological properties (distribution, chaoticity, etc.); however, due to the structure of numbers in computer realizations, their numerical behaviour differs drastically. Therefore the original idea here is to combine features of the tent (T_µ) and logistic (L_µ) maps to achieve new
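A minimal sketch of the two maps in question (our own illustration: T_µ and L_µ follow the standard textbook definitions, and the parameter defaults are ours, not the paper's; the proposed combination itself is not reproduced here):

```python
def tent(x, mu=2.0):
    """Tent map T_mu on [0, 1]: piecewise-linear with peak at x = 0.5."""
    return mu * x if x < 0.5 else mu * (1.0 - x)

def logistic(x, mu=4.0):
    """Logistic map L_mu on [0, 1]."""
    return mu * x * (1.0 - x)

def orbit(f, x0, n):
    """First n iterates of f starting from x0."""
    xs = [x0]
    for _ in range(n - 1):
        xs.append(f(xs[-1]))
    return xs

print(orbit(logistic, 0.1, 5))
```

At full parameter values (µ = 2 for the tent map, µ = 4 for the logistic map) both maps are chaotic on [0, 1] and are topologically conjugate via x = sin²(πy/2), which is the conjugacy the text alludes to.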

Partly convective cool stars possess an internal structure similar to that of the Sun, i.e. an inner radiative zone and an outer convective envelope, supposedly separated by a tachocline. Hence, it is generally assumed that their magnetic fields, as revealed by activity or direct measurements, are generated by a solar-like dynamo. However, some cool partly-convective stars strongly differ from the Sun, either in the depth of their convective zone or in their rotation rate, and the impact of these differences on their dynamo is mostly unknown. On the other hand, main sequence stars less massive than ∼ 0.35 M⊙

Dynamical systems which present mixing behavior and are highly sensitive to initial conditions are called chaotic. Small differences in initial conditions (such as those due to rounding errors in numerical computation) yield widely diverging outcomes for chaotic systems. This effect, popularly known as the butterfly effect, renders long-term prediction impossible in general. This happens even though these systems are deterministic, meaning that their future behavior is fully determined by their initial conditions, with no random elements involved. In other words, the deterministic nature of these systems does not make them predictable. Mastering the global properties of such dynamical systems is today a challenging issue, which we address by exploring several topologies of networks of coupled maps.
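The butterfly effect described above is easy to demonstrate numerically; the sketch below uses the logistic map as a standard chaotic example (the choice of map is ours, not necessarily one of the systems studied in the text):

```python
# Two orbits started a distance 1e-10 apart: the separation grows roughly
# like exp(lambda * n), where lambda is the Lyapunov exponent (ln 2 for
# the logistic map at mu = 4), until it saturates at the attractor size.

def logistic(x, mu=4.0):
    return mu * x * (1.0 - x)

x, y = 0.2, 0.2 + 1e-10
for _ in range(60):
    x, y = logistic(x), logistic(y)

print(abs(x - y))  # after ~60 iterates the orbits are fully decorrelated
```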


DOI: 10.4236/wjm.2019.911016 World Journal of Mechanics
The scale-free network model, whose degree distribution obeys a power-law distribution [3], was proposed by Barabási and Albert in 1999. Since then, complex networks have made great progress and become the theoretical basis of modern network science. Synchronization, as a collective dynamical behavior, is an important and interesting direction in complex networks. In the past two decades, synchronization of complex networks has attracted extensive and increasing attention, with practical applications [4]-[10] such as parallel computing. However, all the reported examples are of continuous phase transitions.
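The Barabási-Albert construction referenced above can be sketched in pure Python (a minimal version of the standard preferential-attachment algorithm; all names and parameter choices are ours):

```python
import random

def barabasi_albert(n, m, seed=0):
    """Edge list of a BA graph: each new node attaches m edges to existing
    nodes chosen with probability proportional to their current degree."""
    rng = random.Random(seed)
    edges = []
    weighted = []                    # nodes repeated once per unit of degree
    targets = list(range(m))         # the first new node links to m seed nodes
    for new in range(m, n):
        edges.extend((new, t) for t in targets)
        weighted.extend(targets)     # each target gained one degree
        weighted.extend([new] * m)   # the new node starts with degree m
        chosen = set()
        while len(chosen) < m:       # m distinct, degree-biased targets
            chosen.add(rng.choice(weighted))
        targets = list(chosen)
    return edges

g = barabasi_albert(200, 2)
print(len(g))  # (n - m) * m edges
```

Sampling from the degree-weighted node list is the usual trick that makes attachment probability proportional to degree, which is what produces the power-law degree distribution.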


Energy use in the global ICT industry is the largest and most significant contributor to its carbon footprint, within both the information and the communication technology domains. In the ICT sector, power is essential to enable task operations and accomplishment [64]. Electricity is needed to run services, applications and equipment, and for related activities such as manufacturing and distribution [65]. As presented by the European Commission in [66], the total electricity usage of the ICT sector in the European Union was estimated to be 119 TWh in 2005, which corresponds to 4.3% of overall electricity consumption, or 0.6% of total energy consumption. For the U.S., it is estimated that ICT's share of electricity consumption was approximately 8% in 2008 [67]. The amount of energy consumed by the ICT sector is also increasing rapidly. Bilal et al. [68], for example, have observed that the energy consumption estimates for IT infrastructure for the year 2011 are double those of 2006, because of the increase in traffic and the subsequent increase in network hardware. The individual contributors to the ICT energy footprint are themselves considerable: Plepys [69] estimated that Internet equipment consumed approximately 8% of the total power in the United States in 2002, with a predicted 50% growth within a decade. As predicted in [70], the energy consumption of telecom networks will keep increasing in the coming years.
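A quick back-of-envelope check of the EU figures quoted above (119 TWh as 4.3% of electricity and 0.6% of total energy consumption; the implied totals are our derivation, not stated in the source):

```python
ict_twh = 119.0               # EU ICT electricity use, 2005 [66]
share_of_electricity = 0.043  # 4.3% of overall electricity consumption
share_of_energy = 0.006       # 0.6% of total energy consumption

total_electricity = ict_twh / share_of_electricity  # implied EU electricity total
total_energy = ict_twh / share_of_energy            # implied EU energy total
print(round(total_electricity), "TWh electricity,", round(total_energy), "TWh energy")
```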


The key space is the total number of different keys that can be used in the encryption [14,15]. There are six parameters in the improved chaotic equation; in theory, the key space of each parameter is 10^14, but owing to the actual precision of computers, the effective key space of each parameter is 10^6, so the key space of the two-dimensional coupled chaotic map is 10^36. This gives the scheme an obvious advantage, and it makes the algorithm easier to implement in hardware. Simulation results show that, even under the limits of existing computer precision, the key space is large enough: 10^36 ≈ 2^119.6, which means an attacker would face a brute-force search of roughly 120 bits to break the algorithm.
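The key-space arithmetic above can be checked directly (a sanity-check sketch; the parameter counts are taken from the text):

```python
import math

params = 6              # parameters in the improved chaotic equation
per_param = 10**6       # effective values per parameter at machine precision

key_space = per_param ** params        # total number of distinct keys
bits = math.log2(key_space)            # equivalent brute-force search size
print(key_space, "keys ~", round(bits, 1), "bits")
```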

The coupled system is an example of CNNs, as described by Chua in his book [7]. According to Chua, all CNNs have much in common, as each cell can be a model of a biological, neurological, chemical or electronic system. Compared to other systems, electrical circuit networks are simpler to build and therefore provide a practical, low-cost way to simulate the other networks. We study traveling wave solutions to this CNN system, since the existence of such solutions is one of the most prominent features of the network. We note that our system is one of the simplest generalizations of the FitzHugh-Nagumo equation, which is a second-order bistable PDE coupled with a linear first-order ODE. The slow system we consider has two complex eigenvalues, while in the FitzHugh-Nagumo system the one-dimensional slow system has only one real eigenvalue.
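For context, the (space-independent) FitzHugh-Nagumo system mentioned above can be integrated with a simple Euler scheme; the parameter values below are generic textbook choices, not taken from this work:

```python
def fhn_step(v, w, dt=0.01, a=0.7, b=0.8, eps=0.08, i_ext=0.5):
    """One Euler step of FitzHugh-Nagumo: a bistable cubic fast variable v
    coupled to a slow linear recovery variable w."""
    dv = v - v**3 / 3 - w + i_ext    # fast bistable dynamics
    dw = eps * (v + a - b * w)       # slow linear recovery
    return v + dt * dv, w + dt * dw

v, w = -1.0, 1.0
for _ in range(5000):                # integrate to t = 50
    v, w = fhn_step(v, w)
print(v, w)
```

The traveling-wave analysis in the excerpt concerns the spatially extended (PDE/lattice) version; this sketch only shows the local cell dynamics that each CNN node generalizes.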


However, it is necessary to assume, besides other factors, a specific network topology before network calculus can be applied. This might be difficult in many application scenarios, as the exact routing topology often cannot be known even roughly beforehand: as a very obvious example, imagine that the sensor nodes are dropped from a plane. Nevertheless, some parameters restricting the resulting topology might be known; the number of nodes in a sensor field or the maximum hop distance in the field are examples of such restricting factors. In particular, such restrictions might be enforced by careful topology control of the sensor network, as for example in [5]. While our proposal does not depend on such restrictions, we will discuss how much a worst-case dimensioning can benefit from such prerequisites.

Let G = (V, E) be a graph with a distinguished set of terminal vertices K ⊆ V. We define the K-diameter of G as the maximum distance between any pair of vertices of K. If the edges fail randomly and independently with known probabilities (vertices are always operational), the diameter-constrained K-terminal reliability of G, R_K(G, D), is defined as the probability that the surviving edges span a subgraph whose K-diameter does not exceed D. In general, the computational complexity of evaluating R_K(G, D) is NP-hard, as this measure subsumes the classical K-terminal reliability R_K(G), known to belong to this complexity class. In this note, we show that even though for two terminal vertices s and t and D = 2, R_{s,t}(G, D) can be determined in polynomial time, the problem of calculating R_{s,t}(G, D) for fixed values of D, D ≥ 3, is NP-hard. We also generalize this result to any fixed number of terminal vertices. Although it is very unlikely that general efficient algorithms exist, we present a recursive formulation for the calculation of R_{s,t}(G, D) that yields a polynomial-time evaluation algorithm in the case of complete topologies where the edge set can be partitioned into at most four equi-reliable classes.
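Since exact evaluation of R_{s,t}(G, D) is NP-hard for fixed D ≥ 3, a crude Monte Carlo estimate is a natural practical fallback. The sketch below is our own illustration, not an algorithm from the note:

```python
import random
from collections import deque

def dist(nodes, edges, s, t):
    """BFS hop distance from s to t over the surviving edge set."""
    adj = {v: [] for v in nodes}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, q = {s: 0}, deque([s])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in seen:
                seen[w] = seen[u] + 1
                q.append(w)
    return seen.get(t, float("inf"))

def mc_reliability(nodes, edges, p, s, t, D, trials=20000, seed=1):
    """Estimate R_{s,t}(G, D): P(hop distance s-t <= D) when each edge
    survives independently with probability p."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        surviving = [e for e in edges if rng.random() < p]
        if dist(nodes, surviving, s, t) <= D:
            hits += 1
    return hits / trials

# Triangle: direct s-t edge plus a two-hop detour, p = 0.9, D = 2.
# Exact value: 0.9 + 0.1 * 0.9 * 0.9 = 0.981.
est = mc_reliability([0, 1, 2], [(0, 1), (0, 2), (2, 1)], 0.9, 0, 1, 2)
print(est)
```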


We have applied our iterative decomposition algorithm to a realistic example of a backbone network, namely, the NSFNET irregular topology shown in Figure 7. Since we will be using traffic data reported in [5], following that study, we have also augmented the 14-node NSFNET topology by adding two fictitious nodes, nodes 1 and 16 in Figure 7, to capture the effect of NSFNET's connections to Canada's communication network, CA*net. The resulting topology consists of 16 nodes and a total of 240 source-destination pairs. As in the previous subsection, we have decided to present detailed results for the call blocking probabilities of only a small number of pairs, and to summarize the results for the whole network. Specifically, we present detailed results for the blocking probabilities of calls involving nodes along the path (3,5,6,7,9,12,15,16). (We note, however, that the shortest path used by some of these calls is not a sub-path of (3,5,6,7,9,12,15,16); for instance, the shortest path for calls between nodes 3 and 15 is (3,5,11,15).) The 28 source-destination pairs in this path, along with the corresponding shortest path lengths and the labels used in Figures 8 through 11, are shown in Table 4.


This paper addresses the throughput problem for large sensor networks with Rayleigh fading channels. To provide insight into the impact of topology on network performance, we compare networks with a random topology and three regular topologies. Placing nodes in regular lattices has an obvious advantage in terms of coverage [16], so we do not address coverage issues here. We define the (per-link) throughput as the expected number of successful packet transmissions of a given link per timeslot. The end-to-end throughput over a multihop connection, defined as the minimum of the throughput values of the links involved, is a performance measure of a route and the MAC scheme.
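The two definitions above (per-link throughput and bottleneck end-to-end throughput) combine into a small sketch. The Rayleigh success probability exp(-theta / mean_snr) is the standard interference-free result for exponentially distributed SNR, assumed here rather than quoted from the paper:

```python
import math

def link_throughput(mean_snr, theta):
    """P(SNR > theta) under Rayleigh fading, i.e. exponential SNR:
    the expected successful transmissions per timeslot for one attempt."""
    return math.exp(-theta / mean_snr)

def end_to_end(route_snrs, theta):
    """Bottleneck (minimum per-link) throughput of a multihop route."""
    return min(link_throughput(s, theta) for s in route_snrs)

# A hypothetical 3-hop route with mean link SNRs 10, 4 and 8 (linear scale)
# and SIR threshold theta = 1: the weakest link dominates.
print(round(end_to_end([10.0, 4.0, 8.0], theta=1.0), 3))
```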


Martin Atzmueller is Associate Professor at the Department of Cognitive Science and Artificial Intelligence at Tilburg University, as well as Visiting Professor at the Université Sorbonne Paris Cité. He earned his habilitation (Dr. habil.) in 2013 at the University of Kassel, where he was also appointed adjunct professor (Privatdozent). He received his Ph.D. (Dr. rer. nat.) in Computer Science from the University of Würzburg in 2006. He studied Computer Science at the University of Texas at Austin (USA) and at the University of Würzburg, where he completed his MSc in Computer Science. His research areas include data science, data mining, network analysis, wearable sensors and big data. He has published more than 200 scientific articles in top venues, e.g., the International Joint Conference on Artificial Intelligence (IJCAI), the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD), the IEEE Conference on Social Computing (SocialCom), the ACM/IEEE International Conference on Advances in Social Networks Analysis and Mining (ASONAM), the ACM International Conference on Information and Knowledge Management (CIKM) and the ACM Conference on Hypertext and Social Media (HT). He is the winner of several Best Paper and Innovation Awards. He regularly acts as a PC member of several top-tier conferences and as co-organizer of a number of international workshops, conferences, and tutorials on the topics of data science and network science, in particular on community detection and mining attributed networks. He can be contacted at m.atzmuller@uvt.nl, and his web site is at https://martin.atzmueller.net. Contact info: Tilburg University, Department of Cognitive Science and Artificial Intelligence, Warandelaan 2, 5037 AB Tilburg, Netherlands, Tel: +31-(0)13 466 4736, m.atzmuller@uvt.nl


Clustering ensembles combine multiple partitions of data into a single clustering solution. They are an effective technique for improving the quality of clustering results. Current clustering ensemble algorithms are usually built on pairwise agreements: between clusterings, which focus on similarity via consensus functions; between data objects, which induce similarity measures from partitions and re-cluster objects; and between clusters, which collapse groups of clusters into meta-clusters. Most of these models make a strong IIDness assumption (i.e. independent and identical distribution), which states that base clusterings perform independently of one another and that all objects are also independent. In the real world, however, objects are generally related to each other through features that are either explicit or implicit. There is also a latent but definite relationship among intermediate base clusterings, because they are derived from the same set of data. All of this demands a further investigation of clustering ensembles that explores the interdependence characteristics of the data. To address this problem, this paper proposes a new coupled clustering ensemble (CCE) framework that works on the interdependent nature of objects and intermediate base clusterings. The main idea is to model the coupling relationship between objects by aggregating the similarity of base clusterings, and the interactive relationship among objects by addressing their neighborhood domains. Once these interdependence relationships are discovered, they act as critical supplements to clustering ensembles. We verified the proposed framework using three types of consensus function: clustering-based, object-based, and cluster-based.
Substantial experiments on multiple synthetic and real-life benchmark data sets indicate that CCE can effectively capture the implicit interdependence relationships among base clusterings and among objects, with higher clustering accuracy, stability, and robustness than 14 state-of-the-art techniques, supported by statistical analysis. In addition, we show through sensitivity analysis that the final clustering quality depends on the data characteristics (e.g. quality and consistency) of the base clusterings. Finally, applications in document clustering, as well as on data sets of much larger size and dimensionality, further demonstrate the effectiveness, efficiency, and scalability of the proposed models.
CCS Concepts: • Computing methodologies → Ensemble methods; Learning latent representations; • Information systems → Clustering; • Applied computing → Document analysis
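The object-based, pairwise-agreement style of consensus the abstract refers to can be illustrated with a co-association matrix (a minimal sketch of the standard technique; CCE's coupling machinery is not reproduced here):

```python
import itertools

def co_association(base_clusterings):
    """Fraction of base clusterings in which each pair of objects shares a
    cluster. base_clusterings: list of label lists, one label per object."""
    n = len(base_clusterings[0])
    m = len(base_clusterings)
    C = [[0.0] * n for _ in range(n)]
    for labels in base_clusterings:
        for i, j in itertools.combinations(range(n), 2):
            if labels[i] == labels[j]:
                C[i][j] += 1.0 / m
                C[j][i] += 1.0 / m
    for i in range(n):
        C[i][i] = 1.0   # every object always co-occurs with itself
    return C

# Three base clusterings of four objects; objects 0 and 1 always agree,
# objects 0 and 3 never do.
C = co_association([[0, 0, 1, 1], [0, 0, 0, 1], [1, 1, 0, 0]])
print(C[0][1], C[0][3])
```

Re-clustering the objects using C as a similarity matrix is the classic object-based consensus function the abstract contrasts CCE against.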


A general discrete dynamical system is sometimes defined as a pair (X, f) consisting of a set X together with a continuous map f from X into itself. The subject of dynamical systems has its roots in classical mechanics, where X is taken as the set of all possible states of a system and the transformation f is the time-evolution map. Chaotic dynamical systems constitute a special class of dynamical systems. During the last four decades, discrete dynamical systems, in particular chaotic ones, have been studied extensively. Although there is no universally accepted mathematical definition of chaos, it is generally believed that if, for a system, the distance between two nearby points increases and the distance between two far-away points decreases with time, the system is said to be chaotic. The first mathematical definition of chaos was given by Li and Yorke [8] in 1975. Robinson's chaos [9] is another type of chaos. Later, Devaney [4] characterized chaos in a somewhat different way. Devaney's definition of chaos is one of the most popular and most widely known definitions of chaos for discrete dynamical systems. The three conditions of Devaney's definition are i) topological transitivity, ii) denseness of the set of periodic points and iii) sensitive dependence on initial conditions. Later it was shown by Banks et al. [1] that conditions i) and ii) together imply condition iii). Although chaotic behaviors of continuous maps in general metric spaces are difficult to study, some progress has been made in this direction during the last three decades. Most of these research papers are concerned with compact metric
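Devaney's three conditions can be written out formally (the standard formulation for a continuous map f : X → X on a metric space (X, d)):

```latex
% (i) topological transitivity:
\forall\, U, V \subseteq X \ \text{open, nonempty}, \ \exists\, n \ge 1 :
  f^{n}(U) \cap V \neq \emptyset
% (ii) density of periodic points:
\overline{\operatorname{Per}(f)} = X
% (iii) sensitive dependence on initial conditions:
\exists\, \delta > 0 \ \forall\, x \in X \ \forall\, \varepsilon > 0 \
  \exists\, y \in X, \ n \ge 1 :
  d(x, y) < \varepsilon \ \text{and} \ d\bigl(f^{n}(x), f^{n}(y)\bigr) > \delta
```

The Banks et al. result cited above says that for continuous maps on infinite metric spaces, (i) and (ii) imply (iii), so sensitivity is redundant in the definition.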

Using this information, the routing algorithm [1,30] shows high performance for the communication network. Thus, we expect that the performance of the chaotic routing algorithm [14-19] is also enhanced if each node obtains not only the shortest-distance information but also the waiting-time information at adjacent nodes. Further, using memory information obtained from the refractory effect of the chaotic neuron, the improved chaotic routing algorithm may show better performance than the Echenique routing algorithm [1,30] by introducing the waiting-time information at adjacent nodes. From these viewpoints, we improved the previous chaotic routing algorithm [14-19] by introducing the shortest-distance information and the waiting-time information at adjacent nodes [20]. We then confirmed that the improved chaotic routing algorithm has high performance in complex networks such as small-world networks [31] and scale-free networks [32]. However, in the previous works [14-20], we evaluated the chaotic routing algorithm for ideal communication networks, wherein each node has equal transmission capability for routing packets and equal buffer size for storing packets. To check whether the chaotic routing algorithm is practically applicable, it is important to evaluate its performance under realistic conditions. In 2007, M. Hu et al. proposed a realistic communication network in which the largest storage capacity and processing capability were introduced [5]. Newman et al. proposed scale-free networks with community structures [33]; these networks effectively extract communities in real complex networks using the shortest-path betweenness. In addition, the scale-free networks [33] share a common structure with real complex networks such as collaboration or communication networks.
In this study, we evaluate the chaotic routing algorithm for communication networks [5,33] to which realistic conditions are introduced. We confirmed that the chaotic routing algorithm, by effectively preventing packet congestion, exhibits better performance than the conventional routing algorithms. Further, the results indicate that the improved chaotic routing algorithm can be realized in low-cost communication networks, in contrast to other conventional routing algorithms.

On the other hand, because the transmission speed of signals or information between neurons is finite, neural networks with coupling delay should be considered. Motivated by the above discussions, in this paper we investigate the impulsive synchronization of drive-response chaotic delayed neural networks. First, we give some sufficient conditions for achieving synchronization, from which we can easily estimate the largest impulsive intervals for given neural networks and impulsive gains. Second, we adopt an adaptive strategy to design adaptive impulsive controllers that relax the restrictions. Notably, the designed controllers are universal for different neural networks. Finally, we present some numerical examples to verify the obtained results.


Although groups of points cannot be correlated to anchors in the LAMP scatterplot, it is still valid to explain them in terms of variable ranges. Adapted from Pagliosa and Telea (2019).


Topology mapping is a well-known operation in running virtual networks that allows an arbitrary virtual network topology to be mapped onto the fixed topology of the physical layer. In a virtualization environment, topology mapping means expressing a requested VNet topology sent to the service providers in terms of specific layout patterns of the interconnections of network elements, such as links and nodes, along with a set of specific service-oriented constraints, such as CPU capacity and link bandwidth [23]. Various parameters are considered in designing a topology mapping algorithm. Despite the freedom that this approach provides, it can impose considerable overhead, both in the mapping operation itself and in the operation of the network, because of non-optimal solutions obtained from a particular algorithm, especially when many VNets are mapped onto the same physical layer. This disadvantage can be very significant in the case of large physical-layer networks, which are more common in resource-sharing approaches such as cloud computing. In topology mapping, there is always a downward mapping between the virtual-layer elements and the physical-layer elements. This one-way approach can result in cases in which the mapped topology on the physical layer and the originally requested topology have a significant topological distance, despite having a negligible distance in terms of the requested service-oriented constraints. This disparity, which results from the VNet's unawareness of the actual physical topology, can lead to a premature failure of the service because of component failures or DC traffic congestion.


The established network reconstruction algorithms for reconstructing signalling networks from phosphorylation data in response to external stimuli typically solve a combinatorial, mixed-integer optimization problem in order to minimize the error of a network-based signalling model with respect to given experimental data. Nodes represent target proteins and edges (connections between nodes) represent the cascade direction of stimulated protein phosphorylation. However, if the number n of network nodes increases, the number of potential networks to be analyzed increases at least exponentially with n. Thus, any algorithm using an exhaustive search over all possible networks with n nodes becomes impractical even at modest n. Since most mechanisms relevant for applications involve multiple pathways and their crosstalk, there is a need for algorithms which avoid the pitfalls of detailed network reengineering in only one step.
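The exponential growth of the search space is easy to make concrete: just counting directed edge sets on n labelled nodes (no self-loops) gives 2^(n(n-1)) candidate networks. This is a crude bound of our own for illustration, ignoring any model-specific constraints:

```python
def num_directed_networks(n):
    """Number of directed graphs on n labelled nodes without self-loops:
    each of the n*(n-1) ordered node pairs may or may not carry an edge."""
    return 2 ** (n * (n - 1))

for n in (3, 5, 8):
    print(n, num_directed_networks(n))
```

Already at n = 8 the count exceeds 7 * 10^16, which is why exhaustive enumeration breaks down at modest network sizes.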
