Computational topology algorithms for discrete 2-manifolds

Isosurfaces Although scalar volumes allow for the visualization of 3D structures, a user is typically interested in manipulating a surface rather than the entire volume of data. For example, a doctor may want to visualize only a specific layer of skin or tissue from a medical MRI scan. An isosurface is the surface defined by a specific scalar value. Specifically, consider an implied surface intersected by the volume grid. This intersection and the entire grid can be represented by tuples (i, F(i)), where i is a point in 3D space and F(i) is the scalar value of the volume at that point. Without loss of generality, we assume that the surface of interest is the zero isocontour of the volume. The surface is pierced by the edges and faces of the grid, creating a collection of patches, each of which we call a surfel, for surface element. Within each cube of the grid, an isosurface generation algorithm (e.g., [54] or [60]) defines the set of surfels [78] (see Figure 2.1.3). Each cube may have up to 4 surfels. The surfels from all cubes together form a discrete representation of the isosurface. We use the connectivity rules of Lachaud [54] because they produce a closed oriented surface without singularities or self-intersections [54]. Lachaud's table has proven properties because it restricts the data to a well-defined interior and exterior: for a scalar function F(i), the interior is defined as F(i) < 0, while the exterior is defined as F(i) ≥ 0. This is similar to a standard general-position argument and creates a well-defined isosurface, i.e., the surface is perturbed away from the volume grid nodes. Isosurfaces are a fundamental data type for geometric modeling. Such a surface can either be extracted from the volume and manipulated as a triangle mesh, or the data structure of the volume itself can be manipulated, affecting the isosurface within.
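As a minimal illustration of the interior/exterior convention described above (F(i) < 0 inside, F(i) ≥ 0 outside), the following Python sketch classifies grid nodes and lists the grid edges that the zero isosurface pierces. The function name and grid layout are our own illustrative assumptions, not Lachaud's actual connectivity tables.

```python
import numpy as np

def pierced_edges(F):
    """Return grid edges crossed by the zero isosurface.

    F: 3D array of scalar samples. Following the convention above,
    a node i is interior iff F[i] < 0 and exterior iff F[i] >= 0,
    so an edge is pierced exactly when its endpoints differ in sign.
    """
    edges = []
    inside = F < 0  # boolean interior mask
    for axis in range(3):
        # Compare each node with its neighbour along this axis.
        a = inside[tuple(slice(0, -1) if d == axis else slice(None) for d in range(3))]
        b = inside[tuple(slice(1, None) if d == axis else slice(None) for d in range(3))]
        for idx in zip(*np.nonzero(a != b)):
            j = list(idx); j[axis] += 1
            edges.append((idx, tuple(j)))
    return edges

# Example: a sphere of radius 0.5 sampled on a 16^3 grid.
g = np.linspace(-1, 1, 16)
X, Y, Z = np.meshgrid(g, g, g, indexing="ij")
F = np.sqrt(X**2 + Y**2 + Z**2) - 0.5
print(len(pierced_edges(F)), "pierced edges")
```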

A Simplified Approach to Rigorous Degree 2 Elimination in Discrete Logarithm Algorithms

The main computational task is to determine the vector of exponents E. To do this, [2] introduced the Zig-Zag method. The idea is to first write g^r h^s as a product of elements of the form θ − C, with C in the union of a larger field and E. From this point, each θ − C is rewritten as a product of elements of the same form in progressively smaller fields. The basic building block, called degree-2 elimination, takes an element θ − C with C in F_{Q^2}

Generation of Catalogues of PL n-manifolds: Computational Aspects on HPC Systems

In particular, since each compact (topological) 3-manifold admits a PL structure and any two PL structures on the same topological 3-manifold are equivalent (i.e. PL-homeomorphic, see [2]), the study of triangulations of PL 3-manifolds is naturally related to the problem of classification, which is still one of the main topics of 3-dimensional topology. The possibility of representing manifolds by combinatorial structures, together with recent advances in computing power, has enabled topologists to construct exhaustive tables of small 3-manifolds (i.e. those obtainable from a small number of simplices) based on different representation methods. In the closed case (i.e. compact and without boundary), catalogues have already been produced and analysed by many authors [8, 19, 20], with a particular focus on combinatorial properties of minimal triangulations.

Reducing Computational Time of Basic Encryption and Authentication Algorithms

Signature Standard (DSS). The Digital Signature Algorithm (DSA) is a public-key technique, developed from the work of ElGamal and Schnorr, that is based on the problem of discrete logarithms. DSA is based on computing exponentiation modulo a large prime number p. The key length is the length of the prime, i.e., 512 or 1024 bits. The size of the exponents used for exponentiation is an important security parameter for DSA; the exponent size is fixed at 160 bits. The General Number Field Sieve is the best known attack against DSA at these key sizes [2].
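To make the "exponentiation modulo" operation above concrete, here is a small Python sketch using the built-in three-argument pow(). The modulus and base are toy values chosen for illustration only; real DSA uses a 512- or 1024-bit prime p with exponents drawn from a 160-bit subgroup.

```python
# Toy sketch of DSA's core operation: fast modular exponentiation.
# The Mersenne prime below is far too small for real security.
import secrets

p = 2**127 - 1                    # a Mersenne prime (illustrative modulus)
g = 3                             # assumed base; not a verified subgroup generator

k = secrets.randbelow(p - 2) + 1  # secret exponent in [1, p-2]
r = pow(g, k, p)                  # g^k mod p via square-and-multiply
print(hex(r))
```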

Computational algorithms for the global stability analysis of driven oscillators

It is clear that routines such as Kaas-Peterson's PATH [29] and Doedel's AUTO [10] have had considerable success in tackling the problems surrounding the description of local solution paths and local bifurcation loci. The ideology of these routines is to define a set of functions which, while varying across the parameter/phase space of the particular dynamic system involved, are constant at the local solution points and/or the local bifurcation points. The locus of local solutions or local bifurcations in parameter/phase space represents the solutions of the resultant fixed-point problems. The specifics of local solution and bifurcation path following are discussed later (chapter 4.0). Thus, as a technique, the homoclinic/heteroclinic bifurcation path-following algorithm, henceforth named MTA (Manifold Tangency Algorithm), follows on ideologically from its progenitors PATH and AUTO. MTA expresses its historical context by the introduction of another, analogous set of functions. These functions constitute the manifold tangency criteria, at present loosely defined as functions which have a fixed point at tangency of the manifolds concerned. The general problem of the tangency of two manifolds in R^m phase space can be viewed as an extension of the problem in two-dimensional phase space. Mostly this chapter will deal with this low-dimensional analogue, but section 2.4.4 will draw out the extensions to higher-dimensional systems.

Computational Performances of OFDM Using Different FFT Algorithms

Orthogonal Frequency Division Multiplexing (OFDM) is a modulation scheme that allows digital data to be efficiently and reliably transmitted over a radio channel, even in multi-path environments [1]. OFDM transmits data by using a large number of narrow-bandwidth carriers. These carriers are regularly spaced in frequency, forming a block of spectrum. The frequency spacing and time synchronization of the carriers are chosen in such a way that the carriers are orthogonal, meaning that they do not cause interference to each other. In an OFDM system, Discrete Fourier Transforms (DFT)/Fast Fourier Transforms (FFT) are used instead of banks of modulators. The computational complexity of implementing the DFT/FFT/Very Fast Fourier Transform (VFFT) in an OFDM system has been calculated and their performance compared. At the current state of wireless communication techniques, facing an ever-increasing demand for high data rates, single-carrier systems offer limited solutions due to the frequency selectivity of the wideband channel, resulting in severe complexities in equalizer design at the receiver end [2]. OFDM, as a multicarrier system, has become an effective modulation technique for next-generation wireless communication methods. Using FFT algorithms provides speed enhancements in data processing for OFDM systems. This technique is being used for Digital Audio Broadcasting (DAB), Digital Video Broadcasting (DVB),
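A minimal sketch of the FFT-as-modulator idea described above, with our own illustrative parameters rather than those of the paper: map symbols onto orthogonal subcarriers with an inverse FFT, add a cyclic prefix, and recover them with a forward FFT.

```python
import numpy as np

rng = np.random.default_rng(0)
N, CP = 64, 16                        # subcarriers and cyclic-prefix length (illustrative)

# Random QPSK symbols, one per subcarrier.
bits = rng.integers(0, 2, (N, 2))
symbols = (2*bits[:, 0] - 1 + 1j*(2*bits[:, 1] - 1)) / np.sqrt(2)

# Transmitter: the IFFT plays the role of the bank of modulators.
tx = np.fft.ifft(symbols)
tx = np.concatenate([tx[-CP:], tx])   # cyclic prefix guards against multipath

# Receiver: drop the prefix and demodulate with a forward FFT.
rx = np.fft.fft(tx[CP:])
assert np.allclose(rx, symbols)       # perfect recovery on an ideal channel
print("recovered", N, "QPSK symbols")
```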

A survey of virtual topology design algorithms for wavelength routed optical networks

Virtual topology design over a WDM WAN is intended to combine the best features of optics and electronics. This type of architecture has been called "almost-all-optical" because traffic is carried from source to destination without electronic switching "as far as possible", but some electronic switching may be performed. The architecture uses clear channels between nodes, called lightpaths, so named because they traverse several physical links but information traveling on a lightpath is carried optically from end to end. Usually a lightpath is implemented by choosing a path of physical links and reserving a particular wavelength on each of these links for the lightpath. This is known as the wavelength continuity constraint, indicating that a lightpath consists of a single wavelength over a sequence of physical links. This constraint can be relaxed by assuming the availability of wavelength converters at intermediate nodes. However, this involves not only expensive equipment but further complications relating to the tuning delay of converters and the issue of converter placement, and in this survey we treat the wavelength continuity constraint as part of the problem, for the most part. Because of limitations on the number of wavelengths that can be used, and hardware constraints at the network nodes, it is not possible to set up a clear channel between every pair of source and destination nodes. The particular set of lightpaths we decide to establish on a physical network constitutes the virtual (otherwise called the logical) topology.
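A minimal sketch of the wavelength continuity constraint as stated above, under our own assumptions about the data layout (a dict mapping each physical link to its set of free wavelengths): a lightpath is feasible only if some single wavelength is free on every link of its route.

```python
def feasible_wavelengths(route, free):
    """Wavelengths usable for the whole lightpath under wavelength continuity.

    route: sequence of physical links, e.g. [("A", "B"), ("B", "C")]
    free:  dict mapping each link to the set of wavelengths still free on it
    """
    common = set.intersection(*(free[link] for link in route))
    return common  # empty set => no lightpath without wavelength converters

free = {("A", "B"): {0, 1, 2}, ("B", "C"): {1, 2}, ("C", "D"): {2, 3}}
print(feasible_wavelengths([("A", "B"), ("B", "C"), ("C", "D")], free))  # {2}
```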

2D Mesh Topology for Routing Algorithms in NoC Based on VBR and CBR

Network-on-Chip (NoC) provides a structured and scalable solution to address communication problems in SoC. An on-chip interconnection network provides advantages over dedicated wiring and buses, i.e., low latency, low power consumption and scalability. The mesh topology has gained favour among designers due to its simplicity. However, source routing has one serious drawback: the overhead of storing the path information in the header of every packet. This disadvantage grows as the size of the network grows. In this project we propose a technique, called Junction Based Routing (JBR), to remove this limitation. In the proposed technique, path information for only a few hops is stored in the packet header. In this project, we have designed a 2D mesh topology for NoC using the XY, OE and JBR (OE) algorithms on the basis of CBR and VBR. The design parameters, viz. latency, total network power and throughput, are compared on the basis of CBR and VBR. It is observed that latency and throughput are improved for VBR as compared to CBR, and total network power is reduced for VBR as compared to CBR.
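For context on the baseline XY algorithm mentioned above, here is a sketch in our own minimal formulation: route fully along X, then along Y, which is deterministic and deadlock-free on a 2D mesh but ignores congestion.

```python
def xy_route(src, dst):
    """Deterministic XY (dimension-ordered) route on a 2D mesh.

    src, dst: (x, y) router coordinates. Returns the hop-by-hop path,
    correcting X first, then Y.
    """
    (x, y), (dx, dy) = src, dst
    path = [(x, y)]
    while x != dx:                     # route in the X dimension first
        x += 1 if dx > x else -1
        path.append((x, y))
    while y != dy:                     # then route in the Y dimension
        y += 1 if dy > y else -1
        path.append((x, y))
    return path

print(xy_route((0, 0), (2, 3)))  # 5 hops on a 4x4 mesh
```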

Resampling Algorithms for Particle Filters: A Computational Complexity Perspective

In our analysis, we considered the memory requirement not only for resampling, but also for the complete PF. The memory size of the weights and the memory access during weight computation do not depend on the resampling algorithm. We consider particle allocation without indexes and with index addressing for the SR algorithm, and with arranged indexing for RSR, PR2, PR3, and OPR. For both particle allocation methods, the SR algorithm has to use two memories for storing particles. In Table 3, we can see the memory capacity for the RSR, PR2, and PR3 algorithms. The difference among these methods is only in the size of the index memory. For the RSR algorithm, which uses particle allocation with arranged indexes, the index memory has a size of 2M, where M words are used for storing the addresses of the particles that are replicated or discarded. The other M words represent the replication factors.
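As a sketch of the replication-factor idea mentioned above, here is a standard textbook formulation of residual systematic resampling (RSR) in Python; it is not necessarily the authors' exact implementation, and the paper's memory layout is not reproduced.

```python
import numpy as np

def rsr(weights, M=None, rng=None):
    """Residual systematic resampling: one replication factor per particle.

    Returns an integer array r with r.sum() == M, where r[k] is how many
    times particle k is replicated (0 means discarded), matching the
    replication-factor storage described above.
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    M = M or len(w)
    rng = rng or np.random.default_rng()
    u = rng.uniform(0.0, 1.0 / M)            # single random offset
    r = np.empty(len(w), dtype=int)
    for k in range(len(w)):
        r[k] = int(np.floor((w[k] - u) * M)) + 1
        u += r[k] / M - w[k]                 # carry the residual forward
    return r

r = rsr([0.1, 0.4, 0.3, 0.2], rng=np.random.default_rng(1))
print(r, r.sum())  # replication factors summing to M = 4
```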

Computational intelligence algorithms for risk adjusted trading strategies

The dataset presently employed is the daily noon New York buying rates for the US Dollar against the Japanese Yen exchange rate from the H10 Federal Statistical Release. The 5292 observations cover the period from 3/1/1985 to 2/1/2007. In addition to the price series, a normalized series is also provided as input to the algorithm. The normalized series is constructed by dividing each observation by the 250-day moving average [15]. Each input pattern contains the current price and the normalized price, while the algorithm can access past prices using the non-terminal node lag. The maximum lag that the algorithm is allowed to consider is 250. The first 3014 patterns were assigned to the training set, the next 502 patterns to the validation set, while the last 1508 patterns comprised the test set. The inclusion of a validation set was intended to alleviate the problem of overfitting. The fitness of an individual on the validation set was used only when assigning the best individual identified during the execution. For both GMA and GP, a rule was designated as the best identified so far if it was at least as good as the current best on both the training and the validation set, and it improved on the performance on at least one dataset (Pareto domination).
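A minimal pandas sketch of the preprocessing described above, using a synthetic stand-in series (the real data is the Fed H.10 release, not generated here): normalize by the 250-day moving average and split 3014/502/1508.

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for the USD/JPY series (illustrative only).
rng = np.random.default_rng(0)
price = pd.Series(150 * np.exp(np.cumsum(rng.normal(0, 0.005, 5292))))

norm = price / price.rolling(250).mean()   # normalized series, as in [15]

df = pd.DataFrame({"price": price, "norm": norm}).dropna()
train, valid, test = df.iloc[:3014], df.iloc[3014:3516], df.iloc[3516:5024]
print(len(train), len(valid), len(test))   # 3014 / 502 / 1508
```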

Comparative Analysis of Scheduling Algorithms in Computational Grid Environment

This paper tracks the development of Grid Computing since its inception in the late 1990s to its dominance in today's world of Distributed Computing and Information Technology. It focuses on the recent developments that spurred our interest in taking up this field of research, with emphasis on the algorithms we are researching for job scheduling and load balancing in the Grid Environment. The entire structure of the Grid Environment is dynamic and hybrid by nature, changing with the availability and capability of the resources or hosts that perform user tasks and with the Quality of Service requirements of the tasks themselves. This makes the problem of developing an optimal task-to-resource schedule that ensures proper load balancing and also produces the minimum overall makespan (time to complete scheduled tasks) an NP-Hard Problem. Our attempt in this research is not to find the optimal solution for this problem, but to analyze and test various algorithms that produce acceptable performance in commonly occurring practical scenarios.
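To ground the makespan objective mentioned above, here is a sketch of one classic heuristic, greedy list scheduling; this is our choice for illustration, not an algorithm from the paper: each task is assigned to the currently least-loaded resource.

```python
import heapq

def greedy_schedule(task_times, n_resources):
    """Greedy list scheduling: each task goes to the least-loaded resource.

    Returns (makespan, assignment). A well-known constant-factor
    approximation for makespan on identical machines, shown here only
    to illustrate the objective being minimized.
    """
    heap = [(0.0, r) for r in range(n_resources)]   # (load, resource id)
    heapq.heapify(heap)
    assignment = []
    for t in task_times:
        load, r = heapq.heappop(heap)
        assignment.append(r)
        heapq.heappush(heap, (load + t, r))
    return max(load for load, _ in heap), assignment

makespan, assign = greedy_schedule([4, 2, 7, 1, 3, 5], n_resources=2)
print(makespan, assign)
```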

Application of various 'response surface' based algorithms in optimization of air manifolds for batch boilers

The design of the external part of the air distribution system applied in biomass-fired batch boilers may be considered equally important to the location, number and shape of the air inlets in the combustion chamber. An adequate shape of the air manifold provides more homogeneous conditions in the combustion area and may reduce the required power of the air fan. Computational fluid dynamics is a powerful tool that is widely used in studies of air distribution systems. To achieve the best possible results, however, the classic (parametric) variant analysis needs to be supported by an optimization method. The preliminary investigation devoted to defining the key details of the geometry provided a set of data required to determine the efficient objective function or functions. Based on the successful results of the optimization performed within the study described in the paper, in the case of air manifolds for boilers, it is recommended to use response surface-based optimization for the improvement of the air flow characteristics. The results of the optimization are strongly linked with the way the DOE is performed and the algorithm applied to create the response surface. Due to the above, it is advised to analyse the dedicated quality indicators before commencing the optimization process.
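As a generic illustration of response-surface-based optimization (not the paper's CFD workflow), the sketch below fits a quadratic surface to a few DOE samples of a made-up objective and minimizes the cheap surrogate on a grid; every function and sample value here is an assumption for illustration.

```python
import numpy as np

# Made-up "expensive" objective standing in for a CFD evaluation.
def objective(x, y):
    return (x - 0.3)**2 + 2*(y + 0.1)**2 + 0.05*x*y

# Small factorial DOE over the design space.
pts = np.array([(x, y) for x in np.linspace(-1, 1, 4) for y in np.linspace(-1, 1, 4)])
z = objective(pts[:, 0], pts[:, 1])

# Quadratic response surface: least-squares fit of [1, x, y, x^2, y^2, xy].
def basis(p):
    x, y = p[:, 0], p[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x**2, y**2, x*y])

coef, *_ = np.linalg.lstsq(basis(pts), z, rcond=None)

# Optimize the cheap surrogate instead of the expensive model.
grid = np.array([(x, y) for x in np.linspace(-1, 1, 201) for y in np.linspace(-1, 1, 201)])
best = grid[np.argmin(basis(grid) @ coef)]
print("surrogate optimum near", best)  # close to the true optimum (0.3, -0.1)
```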

A Proficient Performance Scrutiny of King Mesh Topology based on the Routing Algorithms

Fig. 3. 4x4 King topology: (a) King Mesh, (b) King Torus.

Figure 3 reveals that the complexity of the torus network is high compared to that of the mesh network. The processing time of parallel applications is reduced in the King Mesh topological structure. In a topology based on the Dimension Ordered Routing (DOR) algorithm, or XY-routing algorithm, a packet is first routed to the correct position in the higher dimension before attempting to route in the next dimension. For example, in a 2D mesh, the packet routes first in the X dimension, then in the Y dimension. Thus the number of hops between source and destination increases, which increases the area and power utilization. Hence, the DOR algorithm is not widely used in practical applications. The hop count of the network is reduced by taking the path selection decision according to the path weight of the network. This type of path selection is known as the Weight-based Path Selection (WPS) routing technique, and it does not consider the traffic condition of the network. The switching process helps to transfer data from one node to another. Normally networks use either circuit switching or packet switching to communicate information with other networks. Because of its reduced latency, packet switching enhances the performance of the network architecture.
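A sketch of the weight-based path selection (WPS) idea as described above, in our own minimal formulation: pick the route minimizing total path weight with Dijkstra's algorithm; the graph and weights are illustrative.

```python
import heapq

def wps_route(adj, src, dst):
    """Least-weight path selection over a weighted topology graph.

    adj: dict node -> list of (neighbor, weight). Returns (weight, path).
    As described above, this considers static path weights only, not the
    instantaneous traffic condition of the network.
    """
    heap, seen = [(0, src, [src])], set()
    while heap:
        w, node, path = heapq.heappop(heap)
        if node == dst:
            return w, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, wt in adj[node]:
            if nxt not in seen:
                heapq.heappush(heap, (w + wt, nxt, path + [nxt]))
    return float("inf"), []

adj = {0: [(1, 1), (2, 4)], 1: [(3, 2)], 2: [(3, 1)], 3: []}
print(wps_route(adj, 0, 3))  # (3, [0, 1, 3])
```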

Replication-based inference algorithms for hard computational problems

As time proceeds, the lowest-energy walker corresponds to the lowest-temperature replica. After convergence, or when a certain number of iterations have been done, results for all temperatures can be obtained. PT has a very high performance for the BIP capacity problem and can achieve very high storage capacities. The disadvantage comes from the fact that PT uses much more information than is needed to solve the BIP capacity problem and is therefore computationally expensive. Figure 2 shows the performance of rOnMP compared to PT for a system size K = 21. The graph shows, once again, the same indicator function used for rOnMP, ρ = 1 − χ, where χ is given by Eq. (2). The energy function used in the actual PT simulation was the energy given by Eq. (3). We see that already with n = 10 000 replica rOnMP has a better performance than PT, which was run up to the point when there was no extra improvement. We observed that by increasing the number of replica we can reach better performances, although the improvement becomes more modest for higher values; studies with a large number of replica, n ∼ 10^5, seem to indicate that the critical capacity can indeed be
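For readers unfamiliar with PT, here is a generic textbook sketch of a parallel tempering step on a toy one-dimensional energy, not the paper's BIP-capacity setup: each replica attempts a Metropolis move at its own temperature, then adjacent temperatures attempt a configuration swap.

```python
import numpy as np

rng = np.random.default_rng(0)

def energy(x):
    return (x**2 - 1.0)**2           # toy double-well energy (illustrative)

betas = np.linspace(0.2, 5.0, 8)     # inverse temperatures, hot to cold
x = rng.normal(size=len(betas))      # one walker per temperature

for step in range(5000):
    # Metropolis move within each replica.
    prop = x + rng.normal(0, 0.5, size=len(x))
    d_e = energy(prop) - energy(x)
    accept = rng.random(len(x)) < np.exp(np.minimum(0.0, -betas * d_e))
    x = np.where(accept, prop, x)
    # Swap attempt between two neighbouring temperatures.
    i = rng.integers(0, len(betas) - 1)
    delta = (betas[i+1] - betas[i]) * (energy(x[i+1]) - energy(x[i]))
    if delta >= 0 or rng.random() < np.exp(delta):
        x[i], x[i+1] = x[i+1], x[i]

# The lowest-energy walker should end up at the lowest temperature (largest beta).
print(round(float(x[-1]), 2), "energy", round(float(energy(x[-1])), 4))
```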

Simulation of Topology Control Algorithms in Wireless Sensor Networks Using Cellular Automata

In our work, we have implemented two basic Topology Control Algorithms (TCA), together with several variations of them, namely TCA-1 and TCA-2, and have used cellular automata for experimentally studying their performance. All topology control algorithms are based on the selection of an appropriate subset of sensor nodes that must remain active. In TCA-1, the decision regarding the node state (active or idle) is made by the nodes themselves (i.e., according to the state of the nodes in their neighbourhood), while in TCA-2, this decision is made in terms of predefined categories in which the nodes have been classified (i.e., nodes in one of these categories remain in their current states, ignoring the state of the nodes in their neighbourhoods). The cellular automaton used for TCA-1 has been implemented in previous works using Matlab [18] as well as Java [19]. Furthermore, in this work, we have developed cellular automata for TCA-1 and TCA-2 using the Python programming environment.
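As a generic illustration of the cellular-automaton approach described above, here is a toy neighbourhood rule of our own, not the paper's exact TCA-1 definition: each cell holds a node state, and a node goes idle when enough of its Moore neighbours are active.

```python
import numpy as np

rng = np.random.default_rng(0)
grid = rng.random((20, 20)) < 0.8            # True = active sensor node

def step(grid, keep_threshold=3):
    """One synchronous CA update: a node stays active only if fewer than
    keep_threshold of its 8 Moore neighbours are active (toy TCA-1-like rule)."""
    n, m = grid.shape
    padded = np.pad(grid, 1, constant_values=False)
    neigh = sum(padded[1+dy:1+dy+n, 1+dx:1+dx+m]
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                if (dy, dx) != (0, 0))
    # Nodes with many active neighbours are redundant and go idle.
    return grid & (neigh < keep_threshold)

for _ in range(3):
    grid = step(grid)
print(int(grid.sum()), "nodes remain active out of 400")
```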

Bayesian online algorithms for learning in discrete Hidden Markov Models

1. Introduction. The unifying perspective of the Bayesian approach to machine learning allows the construction of efficient algorithms and sheds light on the characteristics they should have in order to attain such efficiency. In this paper we construct and characterize the performance of mean field online algorithms for discrete Hidden Markov Models (HMM) [5, 9] derived from approximations to a fully Bayesian algorithm.

Adding the reliability on tree based topology construction algorithms for wireless sensor networks

In this section, we describe some prominent CDS-based approaches which are later used to evaluate the reliability of CDS-based techniques. While one way of performing TC is by controlling the transmission range of nodes, backbone-based solutions exercise TC by turning off unnecessary nodes while preserving network connectivity and communication coverage. In [2,11], distributed algorithms for constructing CDSs in unit disk graphs (UDGs) were first proposed. These algorithms consist of two phases to form a CDS. First they form a spanning tree and use it to find maximal independent sets (MIS), in which all nodes are colored black. In the second phase, some new blue-colored nodes are added to connect the black nodes to form a CDS. Likewise, Yuanyuan et al. in [14] proposed the Energy Efficient CDS (EECDS) algorithm, which follows a two-phase TC scheme to form a CDS-based coordinated reconstruction mechanism that prolongs network lifetime and balances energy consumption. Similarly, Wu et al. in [13] proposed a two-phase TC scheme that uses marking and pruning rules for exchanging neighbour lists among a set of nodes. In CDS Rule K [13], a node remains marked as long as there is at least one pair of unconnected nodes among its neighbors; it is unmarked when it finds that all its neighbors are covered with higher priority. All the above studies focus on increasing the network lifetime by forming a reduced topology, but they do not analyze the impact of a reduced topology on network reliability.
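To make the two-phase construction above concrete, here is a sketch on an abstract graph; it is a simplified centralized greedy variant, not the exact distributed algorithms of [2,11]: first pick a maximal independent set (the black nodes), then add connector nodes until the set is connected.

```python
from collections import deque

def greedy_cds(adj):
    """Two-phase connected dominating set on a connected graph.

    Phase 1: greedy maximal independent set (the 'black' nodes).
    Phase 2: add connector ('blue') nodes along shortest paths until
    the subgraph induced by the set is connected.
    """
    # Phase 1: maximal independent set, highest degree first.
    mis, covered = set(), set()
    for v in sorted(adj, key=lambda v: -len(adj[v])):
        if v not in covered:
            mis.add(v)
            covered |= {v} | set(adj[v])
    # Phase 2: connect the black nodes with connector nodes.
    cds = set(mis)
    while True:
        comps = _components(cds, adj)
        if len(comps) <= 1:
            return cds
        # BFS from one component to the nearest node of another.
        start, others = comps[0], set().union(*comps[1:])
        prev, q = {v: None for v in start}, deque(start)
        while q:
            u = q.popleft()
            if u in others:
                while prev[u] is not None:      # add interior path nodes
                    cds.add(u); u = prev[u]
                break
            for w in adj[u]:
                if w not in prev:
                    prev[w] = u; q.append(w)

def _components(nodes, adj):
    """Connected components of the subgraph induced by `nodes`."""
    comps, seen = [], set()
    for s in nodes:
        if s in seen: continue
        comp, q = set(), deque([s])
        while q:
            u = q.popleft()
            if u in comp: continue
            comp.add(u); seen.add(u)
            q.extend(w for w in adj[u] if w in nodes and w not in comp)
        comps.append(comp)
    return comps

# Path graph 0-1-2-3-4: the greedy MIS {1, 3} is joined by connector node 2.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(sorted(greedy_cds(adj)))  # [1, 2, 3]
```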

Quantum algorithms for computing general discrete logarithms and orders with tradeoffs

As the cost of estimating n for a given problem instance is non-negligible, we seek to minimize the number of problem instances considered, whilst capturing the problems that underpin most currently deployed asymmetric cryptologic schemes. To this end, for m ∈ {128, 256, 384, 512, 1024, . . . , 8192}, we pick a single combination of d and r using the method described in section 7.3, and estimate n for a subset of tradeoff factors s ∈ {1, 2, . . . , 8, 10, 20, . . . , 50, 80}, such that the bounded error in the regions included in the histogram is negligible.

Computational Intelligence for Wireless Sensor Networks: Applications and Clustering Algorithms

Particle Swarm Optimization (PSO) was developed in 1995 by James Kennedy and Russell Eberhart [32]. PSO is a robust stochastic nonlinear optimization technique based on the movement and intelligence of swarms. It is inspired by the social behavior of birds or fish, where a group of birds randomly searches for food in an area by following the bird nearest to the food. It combines local search methods with global search methods, depending on social interaction between particles to locate the best position achieved so far. PSO and GA are very similar [33]. Both are population-based stochastic optimization methods that start with a randomly generated population. They use fitness values to evaluate their population, and update the population and search for the optimum with randomized techniques. However, PSO differs from GA in that there is no crossover and mutation. PSO particles do not die. They update themselves with an internal velocity. Finally, the information sharing mechanism in PSO is significantly different. Each particle is treated as a point in a multi-dimensional space, and modifies its position under the influence of two components: the cognitive component and the social component resulting from neighbor communication. The basic PSO equations are shown in Equations 1 and 2. Several enhancements of the standard PSO equation are listed in [34].
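Since the excerpt references the basic PSO update equations (its Equations 1 and 2) without reproducing them, here is the standard textbook form in a Python sketch; the inertia and acceleration coefficients below are common defaults, not values from the survey.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):                      # toy objective to minimize (illustrative)
    return np.sum(x**2, axis=-1)

n, dim = 30, 5
w, c1, c2 = 0.7, 1.5, 1.5           # common default coefficients (assumption)

x = rng.uniform(-5, 5, (n, dim))    # positions
v = np.zeros((n, dim))              # velocities
pbest = x.copy()                    # per-particle best positions
gbest = x[np.argmin(sphere(x))]     # swarm-best position

for it in range(200):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    # Standard velocity update: inertia + cognitive + social components.
    v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)
    x = x + v
    better = sphere(x) < sphere(pbest)
    pbest[better] = x[better]
    gbest = pbest[np.argmin(sphere(pbest))]

print("best value:", sphere(gbest))
```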

Computational Intelligence Algorithms for Optimisation of Wireless Sensor Networks

Recent progress in wireless communications and micro-electronics has contributed to the development of sensor nodes that are agile, autonomous, self-aware and self-configurable. These sensor nodes are densely deployed throughout a spatial region in order to sense a particular event or abnormal environmental conditions such as moisture, motion, heat, smoke, pressure etc. in the form of data Oladimeji et al. [2016]. These sensors, when in large numbers, can be networked and deployed in remote and hostile environments, enabling sustained wireless sensor network (WSN) connectivity. Hitherto, WSNs have been used in many military and civil applications, for example in target field imaging, event detection, weather monitoring, and tactile and security observation scenarios Naeimi et al. [2012]. Nevertheless, sensor node distribution and network longevity are constrained by energy supply and bandwidth requirements. These constraints, combined with the common deployment of large numbers of sensor nodes, must be considered when a WSN network topology is to be deployed. The design of an energy-efficient scheme is a major challenge, especially in the domain of routing, which is one of the key functions of WSNs Chakraborty et al. [2011]. Therefore, inventive techniques which reduce or eliminate the energy inadequacies that would normally shorten the lifetime of the network are necessary. This chapter presents a method which balances energy consumption among sensor nodes to prolong WSN lifetime. Energy resourcefulness is obtained using two mechanisms: firstly, cluster head (CH) selection using a genetic algorithm (GA) is employed, ensuring that appropriately distributed nodes with higher energies are selected as CHs; secondly, a Boltzmann-inspired selection mechanism is utilised to select nodes to send into sleep mode without causing an adverse effect on coverage.
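As a sketch of the Boltzmann-inspired selection mentioned above, here is a generic softmax-over-energy rule in our own formulation, not the chapter's exact mechanism: nodes with lower residual energy are more likely to be sent to sleep, with a temperature parameter controlling how greedy the choice is.

```python
import numpy as np

rng = np.random.default_rng(0)

def boltzmann_sleep_choice(residual_energy, temperature=0.2):
    """Pick one node to put to sleep via Boltzmann (softmax) selection.

    Lower residual energy => higher sleep probability; `temperature`
    trades off greediness vs. exploration (illustrative rule, see lead-in).
    """
    e = np.asarray(residual_energy, dtype=float)
    logits = -e / temperature                  # low energy -> high score
    p = np.exp(logits - logits.max())          # numerically stable softmax
    p /= p.sum()
    return rng.choice(len(e), p=p), p

energy = [0.9, 0.4, 0.85, 0.2, 0.7]            # residual energies (toy values)
node, p = boltzmann_sleep_choice(energy)
print("sleep node:", node, "probabilities:", np.round(p, 3))
```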
