Abstract- The purpose of this paper is to predict the deflection of a cantilever beam using a soft computing technique. A neural network is adopted here for the prediction. The large deflection of the beam is considered in this study. Beams of variable length are used for training the neural network. The aim of the study is to predict the deflection for any intermediate beam length from data obtained from experiments as well as finite element analysis. A very close agreement is observed in this study between the predicted data and the analytical data.
d as a function of the lab momentum (P_Lab), mass number (A), and the number of particles per unit solid angle (Y). In all cases studied, we compared the seven discovered functions produced by the GP technique with the corresponding experimental data, and the agreement was excellent.
PPDM was introduced initially in the year 2000, when many issues related to this active area of research were first discussed. Researchers have since proposed varied models to address the issues related to PPDM. PPDM can be broadly classified into two categories, based on the privacy level provided to the data held by each party (or data custodian, as referred to here): secure multiparty computation and partial information hiding. The first category, secure multiparty computation, provides a robust level of security, whereas the other category, partial information hiding, provides lower levels of privacy with improved mining performance. A considerable amount of work has been carried out in the area of secure multiparty computation. The data held by each custodian may be horizontally partitioned, vertically partitioned, or even clustered using the k-means algorithm. Research utilizing these models was closely studied prior to the development of the proposed model. PPDM using information hiding can be classified into data perturbation, retention replacement, and k-anonymity. The proposed system adopts secure multiparty computation to provide more privacy and uses the C5.0 algorithm to provide better mining results compared to its predecessors, the ID3 and C4.5 decision tree algorithms.
In this work our aim is to achieve a high-throughput, compact AES S-Box with minimum area consumption. Improved architectures are proposed for the implementation of the S-Box and inverse S-Box needed in the Advanced Encryption Standard (AES). Unlike previous work, which relies on look-up tables to implement the SubBytes and InvSubBytes transformations of the AES algorithm, the proposed design employs combinational logic only for implementing SubBytes (S-Box) and InvSubBytes (inverse S-Box), based on Galois field arithmetic. The resulting hardware requirements of the proposed design are presented and compared with the ROM-based and pre-computation techniques.
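The combinational S-Box computes each output byte as a multiplicative inverse in GF(2^8) followed by the AES affine transformation, instead of a 256-entry look-up table. The following sketch shows that arithmetic in software; a hardware design would realize the same operations in (composite-field) combinational logic rather than loops.

```python
def gf_mul(a, b):
    """Multiply two bytes in GF(2^8) with the AES polynomial x^8+x^4+x^3+x+1."""
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= 0x11B  # reduce modulo the AES irreducible polynomial
    return p

def gf_inv(a):
    """Multiplicative inverse in GF(2^8); by AES convention inv(0) = 0."""
    if a == 0:
        return 0
    # a^254 = a^(-1) in GF(2^8); square-and-multiply exponentiation
    result, power, exp = 1, a, 254
    while exp:
        if exp & 1:
            result = gf_mul(result, power)
        power = gf_mul(power, power)
        exp >>= 1
    return result

def sbox(x):
    """SubBytes: GF(2^8) inverse followed by the AES affine transform."""
    b = gf_inv(x)
    y = 0
    for i in range(8):
        bit = ((b >> i) ^ (b >> ((i + 4) % 8)) ^ (b >> ((i + 5) % 8))
               ^ (b >> ((i + 6) % 8)) ^ (b >> ((i + 7) % 8)) ^ (0x63 >> i)) & 1
        y |= bit << i
    return y
```

The inverse S-Box would undo the affine transform first and then take the same GF(2^8) inverse, which is why the two transformations can share the inversion circuit.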
Abstract: Deniable authentication protocols enable a sender to authenticate a message to a receiver such that the receiver is unable to prove the identity of the sender to a third party. In contrast to interactive schemes, non-interactive deniable authentication schemes improve communication efficiency. To date, several non-interactive deniable authentication schemes have been proposed with provable security in the random oracle model. In this paper, we study the problem of constructing a non-interactive deniable authentication scheme secure in the standard model without bilinear groups. An efficient non-interactive deniable authentication scheme is presented by combining the Diffie-Hellman key exchange protocol with authenticated encryption schemes. We prove the security of our scheme by a sequence of games and show that the computational cost of our construction can be dramatically reduced by applying a pre-computation technique.
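The deniability of such shared-key constructions can be illustrated with a toy sketch (not the paper's actual scheme): after a Diffie-Hellman exchange both parties hold the same MAC key, so the receiver can verify a tag but could equally have produced it, and therefore cannot convince a third party of the sender's identity. The group parameters and helper names below are illustrative only; a real deployment would use a standardized group and authenticated encryption.

```python
import hashlib, hmac, secrets

# Toy parameters: a Mersenne prime modulus, for illustration only.
# Real systems use standardized DH groups or elliptic curves.
P = 2**127 - 1
G = 3

def dh_keypair():
    x = secrets.randbelow(P - 2) + 1   # private exponent
    return x, pow(G, x, P)             # (private, public)

def shared_mac_key(own_priv, peer_pub):
    secret = pow(peer_pub, own_priv, P)            # g^(ab) mod P
    return hashlib.sha256(str(secret).encode()).digest()

def authenticate(key, message):
    # The MAC key is shared: the receiver can verify the tag, but since it
    # could have computed the very same tag itself, the tag proves nothing
    # to a third party -- this is the source of deniability.
    return hmac.new(key, message, hashlib.sha256).digest()

a, A = dh_keypair()
b, B = dh_keypair()
k_sender = shared_mac_key(a, B)
k_receiver = shared_mac_key(b, A)

msg = b"meet at noon"
tag = authenticate(k_sender, msg)
```

The pre-computation idea mentioned in the abstract corresponds to computing the shared key once and reusing it across messages, so each authentication costs only one MAC.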
Abstract. In this paper, a multi-structure computation technique based on the Method of Moments (MSMoM) is presented. The technique permits the simultaneous analysis of different structures, such as printed antennas, with only one electromagnetic (EM) simulation. Its performance, in terms of the number of operations required for the analysis of several structures, is evaluated. The complexity of the new technique is considerably reduced compared to an equivalent direct MoM implementation, leading to important time savings.
The numerical method described in previous chapters has been embodied in a computer program for two-dimensional problems allowing the use of arbitrary unstructured, overlapping, and moving meshes. In this chapter the accuracy and efficiency of the method are assessed on a number of test computations. The test cases selected are ones for which well-established numerical solutions exist, experimental data are available, or the solution may be obtained by another program that employs the same or similar methodology. On a number of test cases the present method was proven to produce correct results on single grids before it was extended to overlapping grids. Therefore, the attention here is focused mainly on the assessment of the method's performance when overlapping grids are used. However, wherever possible, the computations were also performed on single grids. These results served mainly as a reference for testing the overlapping grid method, but could also be used for further assessment of the single-grid results. The great flexibility of the overlapping grid technique in the computation of flows around moving bodies is demonstrated on three examples which involve complex motion of bodies relative to each other.
An efficient computation of a composite-length discrete Fourier transform (DFT), as well as a fast Fourier transform (FFT), of both time and space data sequences in uncertain (non-sparse or sparse) computational scenarios requires specific processing algorithms. Traditional algorithms typically employ pruning methods without any commutations, which prevents them from attaining the potential computational efficiency. In this paper, we propose an alternative unified approach with automatic commutations between three computational modalities, aimed at efficient computation of pruned DFTs adapted for variable composite lengths of non-sparse input-output data. The first modality is a direct computation of a composite-length DFT; the second employs the second-order recursive filtering method; and the third performs the new pruned decomposed transform. The pruned decomposed transform algorithm performs decimation in the time or space (DIT) data acquisition domain and then decimation in frequency (DIF). The unified combination of these three algorithms is referred to as the DFT COMM technique. Based on the treatment of the combinational-type hypotheses
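The first modality, direct computation restricted to the output bins actually needed, can be sketched as follows. This is a generic pruned direct DFT, assuming nothing about the paper's specific decomposition; it costs O(N · |wanted|) instead of O(N^2), which pays off when few bins are requested.

```python
import cmath

def pruned_dft(x, wanted_bins):
    """Direct DFT evaluated only at the requested output bins."""
    N = len(x)
    return {k: sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                   for n in range(N))
            for k in wanted_bins}

# A pure tone at bin 3 of a 16-point sequence:
N = 16
x = [cmath.exp(2j * cmath.pi * 3 * n / N) for n in range(N)]
out = pruned_dft(x, [0, 3, 5])  # only three bins computed
```

A commutation logic as described in the abstract would switch between this direct form, recursive filtering, and a decomposed transform depending on N and the sparsity pattern.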
In past articles, different methods of cancer detection have been used. One technique uses a new hybrid approach based on the cuckoo algorithm and the support vector machine (SVM) classifier, whose advantages are increased computational performance and improved accuracy of the SVM parameters; to improve it further, we have applied a combination of a fuzzy neural network and SVM. Another article uses the combination of a decision tree
MAFIA proposes an adaptive interval size to partition each dimension depending on the distribution of data in that dimension. Using a histogram constructed in one initial pass over the data, MAFIA determines the minimum number of bins for a dimension. Contiguous bins with similar histogram values are combined to form larger bins. Bins and cells that have a low density of data are pruned, limiting the eligible candidate dense units and thereby reducing the computation. Since the bin boundaries are not rigid, cluster boundaries are delineated more accurately in each dimension, improving the quality of the clustering results.
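The adaptive binning step can be sketched as: build a histogram in a single pass, then merge contiguous bins whose counts are similar. The bin count and similarity tolerance below are illustrative choices, not MAFIA's actual defaults.

```python
def adaptive_bins(values, n_bins=10, rel_tol=0.25):
    """One-pass histogram followed by merging of contiguous similar bins.

    Returns a list of (lower_edge, upper_edge, count) merged bins.
    """
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins or 1.0
    counts = [0] * n_bins
    for v in values:                       # single pass over the data
        i = min(int((v - lo) / width), n_bins - 1)
        counts[i] += 1
    # Merge adjacent bins whose counts are within rel_tol of each other:
    bins = [(lo, lo + width, counts[0])]
    for i in range(1, n_bins):
        b_lo, b_hi, c = bins[-1]
        prev = counts[i - 1]
        if abs(counts[i] - prev) <= rel_tol * max(prev, counts[i], 1):
            bins[-1] = (b_lo, b_hi + width, c + counts[i])
        else:
            bins.append((b_hi, b_hi + width, counts[i]))
    return bins
```

On uniform data all bins merge into one wide bin; a gap in the data keeps dense and empty regions in separate bins, which is what lets the pruning step discard low-density candidates.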
Nonlinear buckling analysis is usually the more accurate approach and is therefore recommended for the design or evaluation of actual structures. This technique employs a nonlinear static analysis with gradually increasing loads to find the load level at which the structure becomes unstable, as depicted in Figure I.
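The load-stepping idea can be illustrated on a toy one-degree-of-freedom model (a softening spring, not a real finite element analysis): the load is raised in increments, a Newton solve finds equilibrium at each level, and instability is flagged when the tangent stiffness stops being positive or Newton fails to converge. The model and constants are hypothetical; the exact limit load of this toy system is 2/(3√3) ≈ 0.385.

```python
def newton(u0, P, tol=1e-8, max_iter=30):
    """Solve R(u) = F(u) - P = 0 for the toy internal force F(u) = u - u^3."""
    u = u0
    for _ in range(max_iter):
        R = u - u**3 - P            # residual: internal force minus load
        if abs(R) < tol:
            return u
        K = 1.0 - 3.0 * u * u       # tangent stiffness dF/du
        if K == 0.0:
            return None
        u -= R / K
    return None                     # no convergence: limit point passed

def critical_load(dP=0.005):
    """Increase the load step by step until equilibrium is lost."""
    u, P = 0.0, 0.0
    while True:
        u_new = newton(u, P + dP)
        if u_new is None or 1.0 - 3.0 * u_new**2 <= 0.0:
            return P                # last load level with a stable solution
        u, P = u_new, P + dP
```

In a real analysis the scalar residual and stiffness become the global residual vector and tangent stiffness matrix, and instability shows up as a singular (or indefinite) tangent matrix.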
aspect of high-speed maximum a posteriori (MAP) probability decoders, which are intrinsic building blocks of parallel turbo decoders. For the logarithmic Bahl–Cocke–Jelinek–Raviv (LBCJR) algorithm used in MAP decoders, we have presented an ungrouped backward recursion technique for the computation of backward state metrics. Unlike conventional decoder architectures, a MAP decoder based on this technique can be extensively pipelined and retimed to achieve higher clock frequencies. Additionally, the state-metric normalization technique employed in the design of the add-compare-select unit (ACSU) has reduced the critical path delay of our decoder architecture. We have designed and implemented turbo decoders with 8 and 32 parallel MAP decoders in 90 nm CMOS technology. The VLSI implementation of the 8-parallel turbo decoder has achieved a maximum throughput of 439 Mbps with an energy efficiency of 0.11 nJ/bit/iteration. Similarly, the 32-parallel turbo decoder has achieved a maximum throughput of 1.138 Gbps with an energy efficiency of 0.079 nJ/bit/iteration. These high-throughput decoders meet the peak data rates of the 3GPP-LTE and LTE-Advanced standards.
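The backward recursion with state-metric normalization can be sketched on a toy trellis using the max-log approximation: beta_k(s) = max over successor states of (gamma + beta_{k+1}), with the running maximum subtracted each step so metrics stay bounded (in hardware this prevents fixed-point overflow). The 4-state trellis connectivity and random branch metrics below are illustrative, not the LTE turbo code.

```python
import random

def backward_recursion(gammas, n_states=4, trellis=None):
    """Max-log backward recursion with per-step metric normalization.

    gammas[k][s][t] is the branch metric from state s to state t at
    trellis step k; returns the normalized beta metrics per step."""
    if trellis is None:
        # toy connectivity: from state s you can reach s and (s+1) mod n
        trellis = {s: (s, (s + 1) % n_states) for s in range(n_states)}
    beta = [0.0] * n_states
    history = []
    for gamma in reversed(gammas):
        new = [max(gamma[s][t] + beta[t] for t in trellis[s])
               for s in range(n_states)]
        m = max(new)
        beta = [b - m for b in new]   # normalization: subtract the maximum
        history.append(beta)
    return history

random.seed(0)
K, S = 8, 4
gammas = [[[random.uniform(-2, 2) for _ in range(S)] for _ in range(S)]
          for _ in range(K)]
betas = backward_recursion(gammas)
```

Because the subtraction is applied uniformly across states, it cancels in the final log-likelihood ratios and does not change the decoded output.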
Searching pairs of similar data records is an operation required for many data mining techniques like clustering and collaborative filtering. With the emergence of the Web, the scale of the data has increased to several millions or billions of records. Business and scientific applications like search engines, digital libraries, and systems biology often deal with massive datasets in a high dimensional space. The overarching goal of this dissertation is to enable fast and incremental similarity search over large high dimensional datasets through improved indexing, systematic heuristic optimizations, and scalable parallelization. In Task 1, we design a sequential algorithm for All Pairs Similarity Search (APSS) that involves finding all pairs of records having similarity above a specified threshold. Our proposed fast matching technique speeds up APSS computation by using novel tighter bounds for similarity computation and an indexing data structure. It offers the fastest solution known to date, with up to 6X speed-up over the state-of-the-art APSS algorithm. In Task 2, we address the incremental formulation of the APSS problem, where APSS is performed multiple times over a given dataset while varying the similarity threshold. Our goal is to avoid redundant computations across multiple invocations of APSS by storing computation history during each APSS. Depending on the similarity threshold variation, our proposed history binning and index splitting techniques achieve speed-ups from 2X to over 10^5X over the state-of-the-art APSS algorithm. To the best of our knowledge, this is the first work that addresses this problem.
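For reference, the APSS problem itself can be stated as a brute-force sketch: report every pair of records whose cosine similarity meets the threshold. This is only the problem definition, with quadratic cost; the dissertation's contribution is the bounds and index structures that prune most of these candidate pairs.

```python
import math
from itertools import combinations

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def all_pairs(vectors, threshold):
    """Brute-force APSS: all (i, j, sim) with sim >= threshold."""
    out = []
    for i, j in combinations(range(len(vectors)), 2):
        s = cosine(vectors[i], vectors[j])
        if s >= threshold:
            out.append((i, j, s))
    return out

pairs = all_pairs([[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]], threshold=0.7)
```

An index-based solver must return exactly this set, which makes the brute-force version useful as a correctness oracle on small inputs.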
This document presents a newly developed Matlab Simulink model to compute traffic load for real-time traffic signal control. The Signal Processing Blockset and the Video and Image Processing Blockset have been used for traffic load computation. The approach used is a corner detection operation, wherein corners are extracted to count the number of vehicles. This block finds the locations of the corners, the number of corners, and the corner metric values. The developed model computes the results with a high degree of accuracy and is capable of being used to set the green signal duration so as to release the traffic dynamically at traffic junctions.
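Corner-based counting can be illustrated with a minimal Harris-style detector in pure Python on a synthetic frame. The Simulink corner block has its own metric and parameters; the constants below (window size, k, threshold) are illustrative only.

```python
def harris_response(img, k=0.04):
    """Harris corner metric R = det(M) - k*trace(M)^2, where M is the
    structure tensor of central-difference gradients summed over a 3x3 window."""
    h, w = len(img), len(img[0])
    Ix = [[0.0] * w for _ in range(h)]
    Iy = [[0.0] * w for _ in range(h)]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            Ix[r][c] = (img[r][c + 1] - img[r][c - 1]) / 2.0
            Iy[r][c] = (img[r + 1][c] - img[r - 1][c]) / 2.0
    R = [[0.0] * w for _ in range(h)]
    for r in range(2, h - 2):
        for c in range(2, w - 2):
            sxx = syy = sxy = 0.0
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    ix, iy = Ix[r + dr][c + dc], Iy[r + dr][c + dc]
                    sxx += ix * ix; syy += iy * iy; sxy += ix * iy
            R[r][c] = sxx * syy - sxy * sxy - k * (sxx + syy) ** 2
    return R

def count_corners(img, thresh=0.5):
    """Count strict local maxima of the Harris response above a threshold."""
    R = harris_response(img)
    n = 0
    for r in range(1, len(R) - 1):
        for c in range(1, len(R[0]) - 1):
            if R[r][c] > thresh and all(
                    R[r][c] >= R[r + dr][c + dc]
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                    if (dr, dc) != (0, 0)):
                n += 1
    return n

# One bright rectangular "vehicle" blob in a 20x20 frame:
frame = [[1.0 if 5 <= r <= 14 and 5 <= c <= 14 else 0.0
          for c in range(20)] for r in range(20)]
```

On this frame the metric is positive at the four blob corners and negative along the edges, so corner counts can be mapped to a vehicle count once a per-vehicle corner estimate is chosen.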
Since this computation is memory-bound and utilizes the peak bandwidth of GPU memory (i.e., 250 GB/s), the attainable performance is estimated to be 102.5 GFlops. Since a performance of 215.3 GFlops is achieved by using the auto-tuning, which exceeds the estimated value, this auto-tuned stencil function is likely to be well optimized. Although the computation invoked with (64, 2) threads and 16-element z marching, which is used for invoking non-tuned GPU kernels generated by the framework, achieves higher performance than other configurations for almost all mesh sizes, that for an 8 × 512 × 512 mesh remains at 46.4 GFlops, since the number of threads in the x direction is far from the mesh size in that direction. The proposed auto-tuning mechanism overcomes this performance degradation and achieves 102.6 GFlops.
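The 102.5 GFlops figure follows from a roofline-style bound for memory-bound kernels: attainable flops = memory bandwidth × arithmetic intensity. The intensity value below is back-derived from the numbers quoted above, not taken from the original text.

```python
def memory_bound_gflops(bandwidth_gb_s, flops_per_byte):
    """Roofline bound for a memory-bound kernel: bandwidth x intensity."""
    return bandwidth_gb_s * flops_per_byte

# Back-derived arithmetic intensity for the stencil discussed above:
intensity = 102.5 / 250.0          # ~0.41 flops per byte moved
estimate = memory_bound_gflops(250.0, intensity)
```

When a measured figure exceeds this bound (as 215.3 GFlops does here), the kernel is reusing data in cache or registers, i.e., its effective intensity is higher than the naive per-byte estimate.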
The history binning technique stores information about all pairs evaluated in the current invocation of IAPSS. Pairs are grouped based on their similarity scores and stored in binary files. This information is used in the next invocation of IAPSS to avoid re-computation of known similarity scores. Grouping pairs enables our algorithm to read only the necessary parts of the computation history. The I/O for history binning is performed in parallel with the similarity score computation, which reduces the overhead in end-to-end execution time.
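The grouping idea can be sketched as: bucket each evaluated pair by its similarity score, persist one file per bucket, and on the next invocation reload only the buckets whose score range can still qualify under the new threshold. The file layout, JSON encoding, and bin width here are illustrative; the actual system uses binary files and its own binning scheme.

```python
import json, os, tempfile

BIN_WIDTH = 0.1

def bin_id(score):
    return int(score / BIN_WIDTH)

def save_history(pairs, directory):
    """Group (i, j, score) tuples by score bin; persist one file per bin."""
    bins = {}
    for i, j, s in pairs:
        bins.setdefault(bin_id(s), []).append((i, j, s))
    for b, rows in bins.items():
        with open(os.path.join(directory, f"bin_{b}.json"), "w") as f:
            json.dump(rows, f)

def load_qualifying(directory, threshold):
    """Reload only the bins whose upper edge reaches the threshold."""
    out = []
    for name in os.listdir(directory):
        b = int(name[len("bin_"):-len(".json")])
        if (b + 1) * BIN_WIDTH >= threshold:      # bin may contain hits
            with open(os.path.join(directory, name)) as f:
                out.extend((i, j, s) for i, j, s in json.load(f)
                           if s >= threshold)
    return out

workdir = tempfile.mkdtemp()
save_history([(0, 1, 0.95), (0, 2, 0.42), (1, 2, 0.73)], workdir)
hits = load_qualifying(workdir, 0.7)
```

Lowering the threshold on a later invocation only requires reading the additional lower-score bins; pairs already loaded need not be re-evaluated.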
In 1969 Cordell Green presented his seminal description of planning as theorem proving with the situation calculus. The most pleasing feature of Green's account was the negligible gap between high-level logical specification and practical implementation. This paper attempts to reinstate the ideal of planning via theorem proving in a modern guise. In particular, I will show that if we adopt the event calculus as our logical formalism and employ abductive logic programming as our theorem proving technique, then the computation performed mirrors closely that of a hand-coded partial order planning algorithm. Furthermore, if we extend the event calculus in a natural way to accommodate compound actions, then using exactly the same abductive theorem prover we obtain a hierarchical planner. All this is a striking vindication of Kowalski's slogan “Algorithm = Logic + Control”.
Another issue concerns the modeling of the bolt itself. In practice, some of the parameters of the bolted assembly are not precisely determined, such as the preload of the nut or the friction coefficient. Such problems have been analyzed by dedicated techniques for bolt computation [37, 38], including multiresolution. The use of the proposed non-intrusive techniques allows these studies to be extended to 2D–3D structural analyses in a straightforward manner. In addition, the use of model reduction techniques for the global model itself, as proposed previously, should lead to a very significant reduction of the computational time.