Scalable Techniques for the Analysis of Large-scale Materials Data

The DFS algorithm is linear (O(n)), resulting in a highly scalable approach. As the outcome of this algorithm, each vertex of the graph is assigned the index of its corresponding component. Fig. 4.3 illustrates a simple example with the three distinct connected components (precipitates) circled.

Figure 4.3: (a) Illustration of a graph built using all atoms in a region of interest of an APT dataset, with black atoms (label = 1) representing the precipitate. (b) The three connected components of the precipitate identified in the graph.
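A minimal sketch (not the thesis' implementation) of the component-labeling step just described: an iterative depth-first search over an adjacency list assigns each vertex the index of its connected component. The construction of the graph from atom positions is assumed to have been done already.

```python
from collections import defaultdict

def connected_components(adjacency):
    """Assign every vertex the index of its connected component using an
    iterative depth-first search; runs in O(V + E) time."""
    labels = {}
    component = 0
    for start in adjacency:
        if start in labels:
            continue
        labels[start] = component
        stack = [start]
        while stack:
            v = stack.pop()
            for u in adjacency[v]:
                if u not in labels:
                    labels[u] = component
                    stack.append(u)
        component += 1
    return labels

# Toy graph with two separate "precipitates": {0, 1, 2} and {3, 4}.
graph = defaultdict(list)
for a, b in [(0, 1), (1, 2), (3, 4)]:
    graph[a].append(b)
    graph[b].append(a)
print(connected_components(graph))  # {0: 0, 1: 0, 2: 0, 3: 1, 4: 1}
```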

On Scalable Data Mining Techniques for Earth Science

One of the observations made in earth data science is the massive increase in data volume (e.g., higher-resolution measurements) and dimensionality (e.g., hyper-spectral bands). Traditional data mining tools (Matlab, R, etc.) are becoming inadequate for the analysis of these datasets, as they are unable to process or even load the data. Parallel and scalable techniques, though, bear the potential to overcome these limitations. In this contribution we therefore evaluate such techniques in a High Performance Computing (HPC) environment on the basis of two earth science case studies: (a) Density-based Spatial Clustering of Applications with Noise (DBSCAN) for automated outlier detection and noise reduction in a 3D point cloud, and (b) land cover type classification using multi-class Support Vector Machines (SVMs) in multi-spectral satellite images. The paper compares implementations of the algorithms in traditional data mining tools with HPC realizations and 'big data' technology stacks. Our analysis reveals that a wide variety of these tools are not yet suited to the coming challenges of data mining tasks in earth sciences.
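For reference, a single-node baseline of case study (a), assuming scikit-learn is available; the eps and min_samples values are illustrative, and the point of the paper is precisely that such serial tools stop scaling once the point cloud grows large.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Serial baseline only: a dense structure plus scattered outliers in 3D.
rng = np.random.default_rng(0)
points = np.vstack([
    rng.normal(loc=0.0, scale=0.2, size=(1000, 3)),   # dense structure
    rng.uniform(low=-3, high=3, size=(100, 3)),       # scattered outliers
])

labels = DBSCAN(eps=0.3, min_samples=10).fit_predict(points)
denoised = points[labels != -1]   # DBSCAN marks noise with label -1
print(f"kept {len(denoised)} of {len(points)} points")
```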

Techniques for automatic large scale change analysis of temporal multispectral imagery

The volume of the space is calculated for increasing dimensions until it reaches zero. The volume decreases as dimensionality increases; the simple example of the volume of a plane in three dimensions illustrates this possibly confusing statement: once the third dimension, of length zero, is added to the multiplication, the total volume goes to zero as well. The monotonically decreasing volume as a function of dimension yields an estimate of the number of end-members needed to describe the scene, or loosely the number of end-member materials present in the scene. End-members are considered to be the vertices of the n-parallelotope that encloses the data. Physically, an ideal end-member is a pure (un-mixed) pixel containing just one material, and one end-member would exist for each material in the image. Realistically, especially for cluttered scenes, true pure end-members do not exist for every material present in the scene. Change detection using the volume estimate is performed by finding changes in the peak magnitude of the volume estimate for a given image pair. Using this metric, a per-tile change score can be calculated. Both methods described here, the ΔPP metric and the Gram matrix volume estimate, could have application in the final large-scale change analysis scheme outlined in this work.
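A small sketch, under my own assumptions rather than the thesis' tile pipeline, of the Gram-determinant computation behind this estimate: the k-dimensional volume of the parallelotope spanned by k vectors is the square root of the determinant of their Gram matrix, and it collapses to zero as soon as the vectors become linearly dependent, which is exactly the dimension scan described above.

```python
import numpy as np

def parallelotope_volume(vectors):
    """k-dimensional volume of the parallelotope spanned by the rows of
    `vectors` (k x n), via the Gram determinant: vol = sqrt(det(V V^T))."""
    gram = vectors @ vectors.T
    det = np.linalg.det(gram)
    return np.sqrt(max(det, 0.0))   # clamp tiny negative round-off

# Pixels that truly span only a 2-D plane embedded in 3-D space:
v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 2.0, 0.0])
v3 = v1 + 0.5 * v2                  # linearly dependent -> adds no volume

print(parallelotope_volume(np.vstack([v1, v2])))      # 2.0
print(parallelotope_volume(np.vstack([v1, v2, v3])))  # ~0.0 (volume collapses)
```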

Supporting Queries and Analyses of Large Scale Social Media Data with Customizable and Scalable Indexing Techniques over NoSQL databases

- NoSQL databases: scalable storage and efficient random access for their data model
- Mapping abstract index structure to underlying data model
- General applicability and inherited sca…

Supporting Queries and Analyses of Large Scale Social Media Data with Customizable and Scalable Indexing Techniques over NoSQL databases

- Customizable index structures for best query evaluation performance
- Scalable indexing performance achieved through proper mapping to NoSQL databases
- IndexedHBase: reliable index st…
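A minimal sketch of the general idea named in these bullets, using an in-memory stand-in for a NoSQL table; the schema (indexed term as row key, document id as column) is my own illustrative assumption, not IndexedHBase's actual layout.

```python
from collections import defaultdict

# In-memory stand-in for a NoSQL table: row key -> {column -> value}.
table = defaultdict(dict)

def index_document(doc_id, text, timestamp):
    """Map an abstract inverted-index entry onto the key-value model:
    each term becomes a row, each matching document a column."""
    for term in set(text.lower().split()):
        table[term][doc_id] = timestamp

def query(term):
    """Point lookup rides on the store's efficient random access."""
    return table.get(term.lower(), {})

index_document("tweet-1", "scalable indexing over NoSQL", 1001)
index_document("tweet-2", "social media data analysis", 1002)
print(query("NoSQL"))   # {'tweet-1': 1001}
```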

Information and Knowledge Assisted Analysis and Visualization of Large-Scale Data

Intelligent Feature Tracking. Conventional methods for feature extraction and tracking require either an analytical description of the feature of interest or tedious manual intervention throughout the feature extraction and tracking process. Tzeng and Ma [15] presented an intelligent feature extraction and tracking algorithm for visualizing large-scale 4D flow simulation data. They showed that it is possible for a visualization system to learn to extract and track features in a complex 4D flow field according to their visual properties, location, shape, and size. Intelligent feature extraction and tracking is performed in the data space by applying machine learning techniques to high-dimensional data. In this case, the scientists do not need to specify explicitly the relationship between these different dimensions. The feature extraction function is revised through an iterative training process following the temporally changing properties of the tracked features. An example is shown in Figure 4. They also designed an intuitive user interface with multiple coordinated views to facilitate interactive, intelligent feature extraction and tracking. Using a painting metaphor, the scientist specifies a feature of interest by marking directly on the 2D or 3D images of the data through this interface. Such an intelligent system leads to a greatly simplified and intuitive visualization interface.
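An illustrative sketch of the data-space idea described here, not Tzeng and Ma's system: voxels the scientist "paints" become labeled training samples over a high-dimensional feature vector, and the learned classifier is applied to a later time step to extract the same feature. The feature choices, the MLP, and the thresholded stand-in for painting are all assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def voxel_features(volume):
    """Per-voxel feature vector: scalar value, gradient magnitude, position."""
    gz, gy, gx = np.gradient(volume)
    grad_mag = np.sqrt(gx**2 + gy**2 + gz**2)
    z, y, x = np.indices(volume.shape)
    return np.stack([volume, grad_mag, x, y, z], axis=-1).reshape(-1, 5)

volume_t0 = np.random.rand(16, 16, 16)    # placeholder time step 0
painted_mask = volume_t0 > 0.9            # stand-in for the user's "painting"

X = voxel_features(volume_t0)
y = painted_mask.ravel().astype(int)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500).fit(X, y)

volume_t1 = np.random.rand(16, 16, 16)    # next time step
extracted = clf.predict(voxel_features(volume_t1)).reshape(volume_t1.shape)
print(extracted.sum(), "voxels extracted at t1")
```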

DRAT: An Unobtrusive, Scalable Approach to Large Scale Software License Analysis

We ran RAT on a 24-core, 48 GB RAM Linux machine at the National Aeronautics and Space Administration (NASA)'s Jet Propulsion Laboratory (JPL) to produce a license evaluation of the XDATA Git repository and to provide recommendations on how the open source software products could be combined to adhere to the XDATA open source policy encouraging permissive licenses. Against our expectations, however, RAT failed to audit XDATA's large Git repository successfully and quickly. Moreover, RAT provided no incremental output, producing only a final report once a task was completed. RAT's crawler did not automatically discern between binary and other file types. RAT performed better when similar sets of files were collected together (e.g., all Javascript, all C++, all Java) and RAT jobs were then run individually, by file type, on smaller increments of files (e.g., 100 Java files at a time). The lessons learned navigating these issues motivated us to create DRAT, which stands for "Distributed Release Audit Tool". DRAT directly overcomes RAT's limitations and brings code auditing and open source license analysis into the realm of Big Data using scalable open source Apache technologies. DRAT is already being applied and transitioned into government agencies, and is currently available on GitHub under the ALv2 [12]. In this paper, we describe DRAT in detail. The remainder of this paper is structured as follows. Section 2 documents DRAT's architecture and workflow. Section 3 presents the results of running DRAT on several national repositories and reports on its performance. We present related work in Section 4 and draw conclusions and outline future work in the final section.
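A sketch of the "group by file type, then audit in small increments" strategy mentioned above, under the assumption of a simple local crawl; the run_audit_job dispatcher is hypothetical and stands in for RAT invocations or DRAT's distributed workers.

```python
from collections import defaultdict
from pathlib import Path

def partition_by_type(repo_root, chunk_size=100):
    """Group files by extension and yield fixed-size batches, mirroring the
    'all Java files, 100 at a time' strategy described above."""
    groups = defaultdict(list)
    for path in Path(repo_root).rglob("*"):
        if path.is_file():
            groups[path.suffix.lower() or "<none>"].append(path)
    for ext, files in groups.items():
        for i in range(0, len(files), chunk_size):
            yield ext, files[i:i + chunk_size]

# for ext, batch in partition_by_type("xdata-checkout"):   # hypothetical checkout dir
#     run_audit_job(ext, batch)                            # hypothetical dispatcher
```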

ROLE OF KNOWLEDGE MANAGEMENT TECHNIQUES IN DATA MINING FOR LARGE SCALE DATA IN BANKS

techniques, such as completing goals within a given time period for banks and other related systems; combining the required activities into a single cluster (known as clustering) so that they can be performed easily; providing segmented products for better customer selection; rapid fraud detection; model analysis; timely customer acquisition for appropriate retention and reporting; and detection of emerging trends so as to compete better in the current market, which adds much more value to existing products and services and allows new product and service packages to be launched. Data mining has a very large application domain in almost all fields where data is generated, so data mining is considered one of the most useful tools in database and information systems and one of the most effective multidisciplinary developments in the field of banking systems.

SHIP: Scalable Hierarchical Power Control for Large-Scale Data Centers

pp(k + 1) = pp(k) + [1 … 1] Δbr(k). (6) In a real system that has different server configurations or is running a different workload, the actual value of gi may become different from 1. As a result, the closed-loop system may behave differently. A fundamental benefit of the control-theoretic approach is that it gives us theoretical confidence in system stability, even when the estimated system model (6) may change for different workloads. In MPC, we say that a system is stable if the total power pp(k) converges to the desired set point Ps, that is, lim_{k→∞} pp(k) = Ps. In an extended version of this paper [13], we prove that a system controlled by the controller designed under the assumption gi = 1 can remain stable as long as the actual system gain satisfies 0 < gi < 14.8. This range is established through stability analysis of the closed-loop system that takes the model variations into account. To handle systems with an actual gi outside the established stability range, an online model estimator implemented in our previous work [14] can be adopted to dynamically correct the system model based on real power measurements, so that system stability can be guaranteed despite significant model variations.
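A toy scalar simulation of the stability argument, not the paper's MPC controller: the controller computes its correction assuming unit gain while the plant responds with the true gain gi. This toy loop has its own stability range (0 < g < 4 for the step size chosen here), which differs from the paper's 14.8 bound, but it illustrates why convergence to the set point survives a bounded gain mismatch.

```python
# Toy scalar proxy for the closed-loop power control discussed above -- not
# the paper's MPC controller. The controller computes its budget change
# assuming unit gain (g = 1); the plant responds with the true gain g.
def simulate(g, Ps=1000.0, p0=1200.0, steps=40, alpha=0.5):
    p = p0
    for _ in range(steps):
        delta = alpha * (Ps - p)   # controller's correction, derived assuming g = 1
        p = p + g * delta          # actual total power responds with true gain g
    return p

# In this toy model the loop converges to Ps for 0 < g < 2/alpha = 4 and
# diverges beyond that; the 14.8 bound in the text is specific to the
# paper's MPC design, not to this sketch.
for g in (0.5, 1.0, 3.0, 4.5):     # illustrative gains only
    print(g, round(simulate(g), 1))
```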

Scalable system software for high performance large-scale applications

• Dynamic mapping: This approach is the most flexible: virtual-to-physical mappings can change at runtime (dynamic memory mapping). TLB entries store a subset of the current virtual-to-physical mappings and can be replaced at runtime. With dynamic memory mappings, the virtual-to-physical translations of a process can be modified at runtime by the OS. This approach allows memory-on-demand (the virtual-to-physical mapping is determined at the moment a process first addresses a new page), copy-on-write (the virtual-to-physical mapping of a page is modified when the first write to a page shared by parent and child process occurs), swapping (the physical address of a page temporarily swapped to disk is determined after the page is loaded back into memory), virtualization, etc. All these techniques are commonly employed in general-purpose operating systems, such as Linux: they usually provide considerable advantages for desktop or data center systems, but they also introduce time variability and larger overheads. Several LWKs (e.g., CNK) have instead taken the approach of statically mapping page table entries to TLB entries at the beginning of the execution of an application, avoiding TLB misses altogether (fixed TLBs). Replaceable TLBs are less restrictive than fixed TLBs: even though the virtual-to-physical mapping of an application does not change throughout the execution, the mapping may contain more entries than the hardware provides, hence TLB misses occur; this allows more complex mapping layouts than the ones offered by fixed TLB mappings. With static memory mapping (fixed TLBs and replaceable TLBs), pages cannot be remapped at runtime, hence the dynamic techniques described above are not easy to implement. On the other hand, the OS design is simpler and there is minimal runtime overhead.
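A toy illustration of the fixed-versus-replaceable distinction, not an OS implementation: a replaceable TLB caches only part of a static mapping and takes misses once the working set exceeds its capacity, whereas a fixed TLB that pins the whole mapping takes none but forbids remapping.

```python
from collections import OrderedDict

PAGE_TABLE = {vpn: 0x1000 + vpn for vpn in range(64)}   # static mapping

class ReplaceableTLB:
    def __init__(self, entries=8):
        self.entries = entries
        self.cache = OrderedDict()
        self.misses = 0

    def translate(self, vpn):
        if vpn not in self.cache:
            self.misses += 1                    # TLB miss: refill from page table
            self.cache[vpn] = PAGE_TABLE[vpn]
            if len(self.cache) > self.entries:
                self.cache.popitem(last=False)  # evict least recently used entry
        else:
            self.cache.move_to_end(vpn)
        return self.cache[vpn]

tlb = ReplaceableTLB()
for vpn in list(range(16)) * 4:                 # working set larger than the TLB
    tlb.translate(vpn)
print("misses with replaceable TLB:", tlb.misses)
# A fixed TLB with >= 64 entries would pin PAGE_TABLE once and take no misses,
# at the cost of forbidding remapping (no demand paging, copy-on-write, swap).
```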

On large-scale probabilistic and statistical data analysis

In this manuscript we develop and apply modern algorithmic data reduction techniques to tackle scalability issues and enable statistical data analysis of massive data sets. Our algorithms follow a general scheme, where a reduction technique is applied to the large-scale data to obtain a small summary of sublinear size to which a classical algorithm is applied. The techniques for obtaining these summaries depend on the problem that we want to solve. The size of the summaries is usually parametrized by an approximation parameter, expressing the trade-off between efficiency and accuracy. In some cases the data can be reduced to a size that has no or only negligible dependency on n, the initial number of data items. However, for other problems it turns out that sublinear summaries do not exist in the worst case. In such situations, we exploit statistical or geometric relaxations to obtain useful sublinear summaries under certain mildness assumptions. We present, in particular, the data reduction methods called coresets and subspace embeddings, and several algorithmic techniques to construct these via random projections and sampling.
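A minimal sketch of the "reduce, then solve" scheme on my own assumptions: a Gaussian random projection, one standard way to realize a subspace embedding, shrinks the data width while approximately preserving pairwise distances, and a classical algorithm would then run on the small summary.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 5000, 400, 40            # k controls the accuracy/efficiency trade-off

X = rng.normal(size=(n, d))
S = rng.normal(size=(d, k)) / np.sqrt(k)   # random projection matrix
X_sketch = X @ S                           # summary of much smaller width

a, b = X[0], X[1]
print(np.linalg.norm(a - b), np.linalg.norm(X_sketch[0] - X_sketch[1]))
# The two distances agree up to a (1 ± eps) factor with high probability;
# the classical solver is then applied to X_sketch instead of X.
```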

Similarity Based Large Scale Malware Analysis: Techniques and Implications

ML-based desktop malware detection systems using static features date back to 2001. Schultz et al. [34] were among the first to suggest using data mining methods for the detection of new malicious executables. They explored various machine learning algorithms with different static features for malware classification and claimed that the ML detection rate was twice that of the signature-based method. Kolter [35] later tried n-gram features with different classifiers and concluded that the boosted decision tree gave the best classification results. In [36], Tian et al. attempted to classify Trojans using the frequency of function lengths, measured as the number of bytes in the code. Siddiqui [37] used variable-length instruction sequences along with machine learning for detecting worms in the wild. Santos [38] designed a semi-supervised learning approach which was able to learn from labeled and unlabeled data and provides a solution with respect to the intrinsic structure displayed by both labeled and unlabeled instances. Nataraj [39] proposed a method for visualizing and classifying malware using image processing techniques, which visualizes malware binaries as gray-scale images. The authors compared binary texture-based analysis (based on image processing techniques) with dynamic analysis.
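A short sketch of the gray-scale visualization attributed to Nataraj et al. above (illustrative only; the file name is hypothetical): each byte of the binary becomes one pixel intensity, and texture features of the resulting image feed a conventional classifier.

```python
import numpy as np

def binary_to_grayscale(path, width=256):
    """Render an executable as a gray-scale image: one byte = one pixel,
    rows of fixed width. Illustrative sketch only."""
    data = np.frombuffer(open(path, "rb").read(), dtype=np.uint8)
    rows = len(data) // width
    return data[: rows * width].reshape(rows, width)

# img = binary_to_grayscale("sample.exe")   # hypothetical input file
# Texture features of img (or a simple downsampled version) can then be fed
# to a conventional classifier such as k-NN.
```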

Design and Analysis of Large Data Processing Techniques

Sensor Data Analytics - When data from different sources is collected, it can include multimedia files such as video, photo and audio, from which important conclusions for the business can be drawn. For example, on the roads, data from cars' black boxes is collected when vehicles are involved in accidents. There are huge text files including endless logs from IT systems, notes and e-mails, which contain indicators that businesses are keen on. It is also important to understand that the vast number of sensors built into smartphones, vehicles, buildings, robot systems, appliances, smart grids and other devices collect data with a diversity that was unbelievable in the past. These sensors form the basis for the ever-evolving and frequently quoted Internet of Things. All this data can be analyzed with MapReduce. To address this issue, MapReduce has been used for large-scale information extraction [16,17]. On the other hand, due to the rapid increase in data volume at customer sites in recent years, some data such as web logs, call details, sensor data and RFID data cannot be managed by Teradata, partly because of the expense of loading large volumes of data into an RDBMS [18]. In this section, the parameters elaborated for MapReduce are also applicable to Bulk Synchronous Processing, as this technique is implemented via MapReduce.
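A minimal single-process illustration of the MapReduce pattern referred to here; real deployments would use Hadoop or a comparable framework, and the sensor-log format is invented for the example.

```python
from collections import defaultdict

def map_phase(record):
    sensor_id, value = record.split(",")
    yield sensor_id, float(value)

def reduce_phase(sensor_id, values):
    return sensor_id, sum(values) / len(values)     # mean reading per sensor

logs = ["s1,20.5", "s2,7.0", "s1,21.5", "s2,9.0"]

grouped = defaultdict(list)
for record in logs:                                  # map + shuffle
    for key, value in map_phase(record):
        grouped[key].append(value)

print([reduce_phase(k, v) for k, v in grouped.items()])  # [('s1', 21.0), ('s2', 8.0)]
```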

A Novel Approach for Processing Large-Scale Data by Using Peer-To-Peer Techniques

Companies share data where they need to contribute or where they have a common interest. With increasing business demands and the widespread use of cloud computing, such systems have evolved into a new stage of growth towards cloud-enabled systems. The system described here builds a data sharing service for a shared network on top of a peer-to-peer design, combining cloud computing, database and peer-to-peer technologies, and offers pay-as-you-go efficiency. This paper uses a benchmarking method for analysis and evaluation, comparing the approach with HadoopDB, a large-scale data processing platform. The system provides security through private keys, with an administrator authorized to grant access to other users. Using cloud computing, users are allowed to store their data in the cloud and access it when required.

Analysis of large-scale metagenomic data

Fragments are sequenced in parallel, leading to the early designation "massively parallel sequencing". The output from NGS machines is called reads. Each read represents a short section of DNA; historically, machines produced reads as short as 25-50 bases, but today's technologies can produce read lengths between 100-250 bases (some up to 400+ bases), depending on the technology used. Several different NGS technologies exist, which can be grouped into several categories. The most common technologies currently available on the market, and most often encountered, are based on techniques called sequencing by synthesis (e.g. Illumina, 454 pyrosequencing, and Ion Torrent), sequencing by ligation (e.g. ABI SOLiD) and, more recently, single-molecule sequencing (e.g. Pacific Biosciences SMRT). The majority of the sequencing data used in this thesis was produced using Illumina sequencers. They implement the sequencing-by-synthesis approach to DNA sequencing, which starts with sample preparation where the DNA is extracted, purified, and fragmented into templates. These templates are attached to a solid surface on a flow cell where they are amplified to produce many copies of each template in small clusters on the surface. Each cluster thus consists of many copies of short identical DNA strands sitting close together. The flow cell of an Illumina HiSeq 2500 can have between 610,000-678,000 such clusters per square millimeter (Illumina, 2013), which is one of the reasons for their immense throughput. The actual sequence of nucleotides in the strand clusters is determined by flowing a solution of free nucleotides across the flow cell. The nucleotides attach to their complementary bases on the strands, and emit a small flash of light when they bind to the next available position in the strands in the clusters. A high-resolution camera detects the emitted light, coded with different colours for each of the four nucleotides, to produce a sequence of images. From these, the sequence of nucleotides on the strands in each cluster can be determined. This constitutes the base calling process. For interested readers, a very good presentation of different NGS technologies is given in (Metzker, 2010).

NLP driven large scale financial data analysis

With the development of natural language processing (NLP) in both academia and industry, a vast number of useful apps and programs are becoming more and more powerful. From restaurant recommendations to movie ratings, more and more customers are benefiting from the convenience that NLP techniques provide. The stock market, a critical source of revenue for many people and organizations, has been gaining attention from news agencies, individuals, and government-associated institutions, and thus there are abundant corpora about it as well. From giant commercial institutions to individual traders, everybody on the stock market tries to collect information, analyze it, and make wise decisions about selling or buying, maintaining a good portfolio to maximize their assets. However, since the stock market is influenced by many factors and is to a large extent unstable or even stochastic, this remains a hard task and there is currently no mature and satisfying solution.

A scalable analysis framework for large-scale RDF data

Compared with the two issues above, load balancing presents a bigger challenge, as the distribution of terms in the Semantic Web is highly skewed: there exist both popular terms (like the predefined RDF and RDFS vocabulary) and unpopular terms (like identifiers for entities that appear only a limited number of times). For a distributed system, like ours, any compression algorithm needs to be carefully engineered so that good network communication and computational load balance are achieved. If terms were assigned using a simple hash distribution algorithm, the continuous re-distribution of all the terms would undoubtedly lead to an overloaded network. Furthermore, popular terms would lead to load-balancing issues. For the sake of explanation, let us categorize terms into three groups: high-popularity terms that appear in a significant portion of the input triples, low-popularity terms that appear less than a handful of times, and average-popularity terms (which form the largest portion of RDF data). The state-of-the-art MapReduce compression algorithm [116] efficiently processes high-popularity terms. The very first job in the algorithm is to sample and assign identifiers to popular terms, using an arbitrarily chosen threshold. These identifiers are then distributed to all nodes in the system, and used to encode terms locally at each node. This dramatically improves load balancing and speeds up computation. For the rest of the terms, the data is repartitioned and identifiers are assigned. For low-popularity terms this also works well, as there are not many redundant data transfers: we can either retrieve their mappings (possibly from multiple nodes), or send the data to the node where it is going to be encoded. In either case, the number of messages will be limited. For average-popularity terms, the situation is different. Assume a term that appears 10000 times, and we have 100 compute nodes. If all nodes need to retrieve the mapping from a single node, we need 200 messages. If we repartition the terms, we need at least 10000 messages. One can easily see the situation reversed for a term that appears 100 times (i.e. partitioning the data might be more efficient than retrieving the mappings). The question then is: how can we reconcile efficient encoding of popular and non-popular terms?
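A back-of-the-envelope sketch of the trade-off in the example above, under the simplifying assumptions that fetching a mapping costs one request and one reply per node and that repartitioning moves each occurrence of the term once.

```python
def retrieve_cost(nodes):
    return 2 * nodes                 # each node asks the owner and gets a reply

def repartition_cost(occurrences):
    return occurrences               # every triple mentioning the term moves once

nodes = 100
for occurrences in (100, 10_000):
    print(occurrences,
          "retrieve:", retrieve_cost(nodes),
          "repartition:", repartition_cost(occurrences))
# 100 occurrences   -> retrieve: 200, repartition: 100    (repartitioning wins)
# 10000 occurrences -> retrieve: 200, repartition: 10000  (retrieving wins)
```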

Distributed Data Analysis For Large-Scale Social Networks

This paper addresses the problem of discovering the k closest neighbors of a moving query point (we call it k-NNMP). It is an important problem in both mobile computing research and real applications. The problem assumes that the query point is not static, as in the k-nearest-neighbor problem, but varies its position over time. In this paper, four different techniques are proposed for solving the problem. A discussion of the parameters influencing the performance of the algorithms is also presented. A series of experiments with both synthetic and real point datasets is studied. In the experiments, our algorithms consistently outperform the existing ones, fetching 70% fewer disk pages. In some settings, the saving can be as much as one order of magnitude.

SCALABLE AND ROBUST CLUSTERING AND VISUALIZATION FOR LARGE SCALE BIOINFORMATICS DATA

single-gene evolution (the coalescent) derived from a given sequence dataset (such as for the 28S rRNA gene), GMYC uses a maximum-likelihood approach to determine the transition point from sequence changes representing speciation (Yule) events to those representing coalescent events (population divergence within the same species) [80]. A recently proposed alternative to the GMYC method is the Poisson Tree Process (PTP), which is computationally faster than the GMYC method while also achieving increased clustering accuracy [81]. The PTP estimates species clusters using a maximum-likelihood phylogenetic tree produced from the sequences as a guide (instead of the coalescent tree required for the GMYC method), and assumes that each nucleotide substitution has a fixed probability of being the basis for a speciation event. The PTP is able to give accurate species determinations regardless of the amount of sequence similarity between the species being compared. However, the PTP still requires either a multiple sequence alignment or a guide phylogenetic tree in order to cluster sequences, and is therefore computationally more costly than a clustering algorithm that uses pairwise sequence alignment. The methods used for phylogenetic tree creation have become more standardized compared to clustering techniques. The most commonly accepted methods are probabilistic approaches, including maximum likelihood (RAxML) [45] and Bayesian [44] methods. Because both of these methods incorporate uncertainty in phylogenetic tree construction, they are thought to provide phylogenies that are closely aligned with actual patterns of evolutionary history.

Scalable performance analysis of large-scale parallel applications on Cray XT systems with Scalasca

OpenMP features, in addition to message passing, are also supported. Figure 1 shows the Scalasca workflow for instrumentation, measurement, analysis and presentation. Before performance data can be collected, the target application must be instrumented and linked to the measurement library. The instrumenter for this purpose is used as a prefix to the usual compile and link commands, offering a variety of manual and automatic instrumentation options. MPI operations are captured simply via re-linking, whereas a source preprocessor is used to instrument OpenMP parallel regions. Often compilers can be directed to automatically instrument the entries and exits of user-level source routines, or the PDToolkit source-code instrumenter can be used for more selective instrumentation of routines [4]. Finally, programmers can manually add custom instrumentation annotations for important regions into the source code via macros or pragmas, which are ignored when instrumentation is not activated.
