Data computation

Semantic Trajectories: Mobility Data Computation and Annotation

Enrichment with geographic points: Complementary research focuses on identifying meaningful points of interest (POIs) related to trajectories, based on clustering [Zhou et al. 2007; Palma et al. 2008] or reinforcement inference techniques (e.g., HITS and PageRank) [Cao et al. 2010; Zheng et al. 2009]. In addition, [Xie et al. 2009] designs a semantic spatio-temporal join method to infer activities from trajectories, based on a small set of pre-defined geographic hotspots. [Li et al. 2010] design an algorithm for mining periodic behaviors in trajectories, focusing on semantic points like home/office. However, most of these studies consider only environments with sparse POIs, where identifying the meaningful POI for each trajectory part is trivial. In our approach, we consider trajectories in a city center with very dense POIs. We design an HMM-based POI inference method for identifying the latent stop behaviors hidden in the raw mobility data. In summary, we observe that these semantic enrichment works focus on specific situations and provide algorithms that are applicable to computing and annotating only certain kinds (or parts) of trajectories [Alvares et al. 2007; Yin et al. 2004; Palma et al. 2008; Xie et al. 2009; Newson and Krumm 2009], e.g., map-matching for vehicle moves or extracting important POIs for hotspots. None of them considers the analysis of complete trajectories that contain heterogeneous semantics, like the example of a semantic trajectory in Section 1 (with semantics on both stops and moves). It is difficult to adapt these works to different types of moving objects (e.g., vehicle and people trajectories), or to trajectories crossing geo-objects of different kinds (e.g., lines, regions, and points). Moreover, inferring such heterogeneous semantics requires multiple geographic data sources to be combined meaningfully. Our objective is to create a holistic framework for end-to-end computation and annotation of heterogeneous trajectories.
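As a rough illustration of how such an HMM-based inference of latent stop semantics can be decoded, the sketch below runs Viterbi over a toy model; the states, observation symbols, and all probabilities are invented placeholders, not the paper's trained model. In the paper's setting the hidden states would presumably correspond to candidate POIs or stop behaviors, and the observations to features extracted from the raw GPS episodes.

```python
import numpy as np

# Minimal Viterbi sketch for inferring latent stop semantics (e.g. POI category)
# from a sequence of coarse observations. States, symbols, and probabilities are
# illustrative placeholders only.
states = ["home", "office", "restaurant"]          # hidden POI categories
obs_symbols = ["long_night_stop", "long_day_stop", "short_noon_stop"]

pi = np.array([0.5, 0.3, 0.2])                      # initial state distribution
A = np.array([[0.6, 0.3, 0.1],                      # state transition probabilities
              [0.3, 0.5, 0.2],
              [0.4, 0.4, 0.2]])
B = np.array([[0.7, 0.2, 0.1],                      # emission probabilities
              [0.1, 0.7, 0.2],
              [0.1, 0.3, 0.6]])

def viterbi(obs):
    """Return the most likely hidden state sequence for a list of observation indices."""
    T, N = len(obs), len(states)
    delta = np.zeros((T, N))
    psi = np.zeros((T, N), dtype=int)
    delta[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] * A          # previous state -> current state
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) * B[:, obs[t]]
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t][path[-1]]))
    return [states[s] for s in reversed(path)]

# Example: a night-long stop, then a daytime stop, then a short stop around noon.
print(viterbi([0, 1, 2]))   # -> ['home', 'office', 'restaurant']
```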

Semantic trajectories: Mobility data computation and annotation

For the initial data preprocessing, researchers have designed algorithms for cleaning (i.e., dealing with data errors and outliers) and compression. For example, Marketos et al. propose a parametric online approach that filters noisy positions (outliers) by taking advantage of the maximum allowed speed of the moving object [Marketos et al. 2008]. On the other hand, random errors are small distortions from the true values, and their influence is decreased by smoothing methods (e.g., Jun et al. [2006] and Schüssler and Axhausen [2009]). Additionally, many works study trajectory data compression. For instance, Meratnia and de By design opening window techniques for online compression, which offer two choices when a threshold violation occurs: using the point that causes the violation (NOPW, normal opening window) or using the point just before the violation (BOPW, before opening window) [Meratnia and de By 2004]. Unlike these works, our semantic trajectory computation and annotation platforms can support more efficient one-loop data cleaning and semantic data compression. Recent progress has been made on semantic trajectory reconstruction for real-time movement streaming data [Yan et al. 2011b].
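The following sketch illustrates the two preprocessing steps just described: a maximum-speed outlier filter and opening-window compression in its BOPW variant. It assumes projected (x, y, t) points in metres and seconds, and the thresholds and deviation measure are illustrative choices rather than the cited papers' exact definitions.

```python
import math

def speed_filter(points, v_max):
    """Drop positions that imply a speed above v_max w.r.t. the last kept point."""
    kept = [points[0]]
    for x, y, t in points[1:]:
        px, py, pt = kept[-1]
        dt = t - pt
        if dt > 0 and math.hypot(x - px, y - py) / dt <= v_max:
            kept.append((x, y, t))
    return kept

def _deviation(p, a, b):
    """Perpendicular distance of point p from the line through anchor a and point b."""
    (px, py, _), (ax, ay, _), (bx, by, _) = p, a, b
    dx, dy = bx - ax, by - ay
    seg = math.hypot(dx, dy)
    if seg == 0:
        return math.hypot(px - ax, py - ay)
    return abs(dy * (px - ax) - dx * (py - ay)) / seg

def bopw_compress(points, eps):
    """Opening-window compression, BOPW flavour: on a threshold violation, the point
    just before the violating one becomes the new anchor; NOPW would instead use the
    violating point itself."""
    if len(points) < 3:
        return list(points)
    out = [points[0]]
    anchor = 0
    i = anchor + 2
    while i < len(points):
        if any(_deviation(points[k], points[anchor], points[i]) > eps
               for k in range(anchor + 1, i)):
            anchor = i - 1                 # BOPW: keep the point before the violation
            out.append(points[anchor])
            i = anchor + 2
        else:
            i += 1
    out.append(points[-1])
    return out

track = [(0, 0, 0), (10, 1, 5), (20, 0, 10), (500, 0, 11), (30, 1, 15), (40, 0, 20)]
print(bopw_compress(speed_filter(track, v_max=30.0), eps=2.0))
```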

MEMORY-BASED MULTI-PROCESSING METHOD FOR BIG DATA COMPUTATION

The rapid growth of the Internet, together with the evolution of electronic businesses and applications, has led to a surge of large datasets and to millions or even billions of distributed data items. These data are often unstructured, complex, and far too large for traditional data management applications to adequately handle and process. Thus the term Big Data was coined, referring to data assets characterized by volume, velocity, and diversity so high that they require specific technologies and analytical methods to be transformed into information [1]. It has been estimated that data have roughly doubled every 40 months since the 1980s [2]. As of 2012, 2.5 Exabytes of data are generated every day [3]. Moreover, the global data volume will grow exponentially from 4.4 to 44 Zettabytes between 2013 and 2020 [4], and by 2025 there will be 163 Zettabytes of data stored and shared among digital systems [5]. Today, Big Data is perceived as having at least three shared characteristics: extremely large volume, extremely high velocity, and extremely wide variety of data [6]. In fact, organizations today are at a tipping point in data management. They have moved from an era in which technology supported a particular business need, such as determining how many items were sold or how many items are still in stock, to a time when organizations have more data from more sources than ever before. Consequently, Big Data has become a prominent problem for large enterprises managing their digital assets and resources [7]. For this reason, creating new ways, methodologies, and algorithms for managing Big Data is something every industry must pursue if it is to survive the next era of computing, and it is therefore regarded as the most important upcoming challenge facing the world of information technology [8]. This paper proposes a Memory-Based, Multi-Processing, and One-Server method for processing Big Data. It is memory-based in that it allocates data in the computer's high-speed primary memory prior to processing, using high-performance data structures such as hash tables. It is multi-processing in that it exploits parallel programming techniques to manipulate data over multi-processor/core systems using multithreading.
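A minimal sketch of the general idea, not the paper's implementation: each worker builds an in-memory hash table over its chunk of the data and the partial tables are merged afterwards. Python processes are used here instead of threads because CPython threads do not parallelise CPU-bound work; the chunking scheme and toy workload are assumptions.

```python
from collections import Counter
from concurrent.futures import ProcessPoolExecutor

def aggregate_chunk(records):
    """Count occurrences per key for one in-memory chunk of (key, value) records."""
    table = Counter()                 # hash table held entirely in primary memory
    for key, _value in records:
        table[key] += 1
    return table

def parallel_aggregate(records, workers=4):
    """Split the in-memory data, aggregate chunks in parallel, merge partial tables."""
    chunk = max(1, len(records) // workers)
    chunks = [records[i:i + chunk] for i in range(0, len(records), chunk)]
    merged = Counter()
    with ProcessPoolExecutor(max_workers=workers) as pool:
        for partial in pool.map(aggregate_chunk, chunks):
            merged.update(partial)
    return merged

if __name__ == "__main__":
    data = [("item-%d" % (i % 100), i) for i in range(1_000_000)]
    print(parallel_aggregate(data).most_common(3))
```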

HIGH DIMENSIONAL DATA COMPUTATION USING ZINC EXPERIMENTS

Synthetic datasets: We generated three types of synthetic datasets. For TO domains, we used the same data generator as [8] to generate synthetic datasets with different distributions. For PO domains, we generated DAGs by varying three parameters that control their size and complexity: height (h), node density (nd), and edge density (ed), where h ∈ Z+ and nd, ed ∈ [0, 1]. Each value of a PO domain corresponds to a node in the DAG, and the dominance relationship between two values is determined by the existence of a directed path between them. Given h, nd, and ed, a DAG is generated as follows. First, a DAG is constructed to represent the poset formed by the powerset of a set of h elements ordered by subset containment; thus, the DAG has 2^h nodes. Next, (1 − nd) × 100% of the nodes (along with their incident edges) are randomly removed from the DAG, followed by randomly removing (1 − ed) × 100% of the remaining edges, such that the resultant DAG is a single connected component with a height of h. Following the approach in [8], all the PO domains for a dataset are based on the same DAG. Table 2 shows the parameters and their values used for generating the synthetic datasets, where the first value shown for each parameter is its default value. In this section, default parameter values are used unless stated otherwise.
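A sketch of this PO-domain generator: build the subset-containment DAG over the powerset of h elements and then randomly prune nodes and edges. Unlike the generator described above, this sketch does not re-enforce the single-connected-component and height-h constraints; function names and defaults are illustrative.

```python
import itertools
import random

def powerset_dag(h):
    """DAG over the powerset of h elements; an edge u -> v is added iff v covers u
    (v has exactly one extra element), so reachability encodes subset containment."""
    nodes = [frozenset(c) for r in range(h + 1)
             for c in itertools.combinations(range(h), r)]
    edges = {(u, v) for u in nodes for v in nodes
             if u < v and len(v - u) == 1}
    return nodes, edges

def po_domain(h, nd, ed, seed=0):
    """Keep nd*100% of the nodes and ed*100% of the surviving edges.
    NOTE: the connectivity and height-h constraints of [8] are not re-checked here."""
    rng = random.Random(seed)
    nodes, edges = powerset_dag(h)
    keep = set(rng.sample(nodes, max(2, int(nd * len(nodes)))))
    edges = [(u, v) for (u, v) in edges if u in keep and v in keep]
    edges = rng.sample(edges, max(1, int(ed * len(edges)))) if edges else []
    return keep, edges

nodes, edges = po_domain(h=4, nd=0.8, ed=0.7)
print(len(nodes), "nodes,", len(edges), "edges")
```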

Data, Computation and Creative Collaboration: Reflections on interdisciplinary digital design and research

disciplinary, and ranged from the technical to the qualitative and from the personal to the curatorial. On a thematic level, specific narratives within the data revealed points of divergence and agreement between the generations, and challenged stereotypical views of generational engagement with, and participation in, democracy. On a practical front, we shared an interest in materialising this data: literally making it present in the exhibition space, but also revealing its inherent dramas of conflict and change and emphasising its personal implications. Our final realisation responded to these emerging matters of concern. In line with Latour, we sought to stage the data as embodied and affective, hoping to provoke reflection on matters of concern and to 'entangle' the audience in them, rather than pursuing an idealised reading of specific matters of fact. Once again, the ability to create tangible representations of abstract data was central and formative. Even though the data in this case was extremely simple, the process of transforming those values into explicit spatial forms was critical. Three-dimensional sketches visualised the data 'in situ' and provided tools to structure the Generations Room, including points of interchange and integration with other exhibition elements and systems.

Computation of Big Data in Hadoop and Cloud Environment

All of the above makes us realise that, even though Big Data can be processed, opting for such technologies is still very expensive and out of reach for small and medium-sized industries. Besides this, Big Data poses various other problems in the fields of data processing and resource management, data integration, data storage, data visualization and user interaction, and model building and scoring. The organized approach toward data collection that enhances randomness in sampling and reduces favouritism is not apparent in the collection of Big Data sets [20]. Big Data sets do not naturally reject data bias: the data collected can still be imperfect and inaccurate, which in turn can lead to twisted conclusions. Twitter, which is commonly inspected for insights about user feelings, has an inherent problem as a data source, because only 40% of Twitter's active users are just listening and not contributing. This suggests that the tweets come from a certain type of person (perhaps people who are more vocal and participative in social media) rather than from a truly random sample [21]. Twitter makes a sample of its material available to the public through its streaming Application Programming Interfaces (APIs); it is not, however, clear how that sample is actually derived [22]. In broader terms, there are three areas of problems associated with Big Data: Big Data computation and management, Big Data computation and analysis, and Big Data security. A solution that is reliable and cheap enough to bring the benefits of Big Data to every class of user is Cloud Computing [43].

Rapid computation and visualization of data from Kano surveys in R

In this section, an R function (called kano) for conducting the analysis explained above is proposed. The kano function does three main things: (1) classification of product features into Kano classes (Table 1), (2) calculation of CS and DS values, and (3) plotting of the relationship functions between individual product features and customers' satisfaction. To provide a reproducible example, we consider a dataset containing Kano data for six features of a hypothetical product. The present section provides a step-by-step walkthrough of its analysis in R. Both the dataset and the code used for the analysis are provided as Additional files 1 and 2.
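The paper's kano() function is written in R and is not reproduced here. As an illustration of step (2) only, the snippet below computes CS and DS coefficients in Python using the formulas that are standard in the Kano literature, which is an assumption since the excerpt does not spell them out.

```python
# CS/DS coefficients per feature, assuming the widely used formulas
#   CS = (A + O) / (A + O + M + I),   DS = -(O + M) / (A + O + M + I)
# where A, O, M, I count Attractive, One-dimensional, Must-be, Indifferent answers.

def cs_ds(counts):
    a, o, m, i = (counts[k] for k in ("A", "O", "M", "I"))
    total = a + o + m + i
    if total == 0:
        return 0.0, 0.0
    return (a + o) / total, -(o + m) / total

# Example: classification counts for one product feature.
print(cs_ds({"A": 14, "O": 10, "M": 5, "I": 3}))   # -> (0.75, -0.46875)
```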

Aggregate data and the Prohorov metric framework: efficient gradient computation

In this paper we consider the case of non-parametric estimation of a probability measure under the Prohorov Metric Framework (PMF) in a least squares problem. It is demonstrated that the cost of the gradient computation can be reduced by exploiting the linearity in the coefficients to be estimated, which appear in the approximation schemes used under the PMF.
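A toy numerical illustration of that point: if the approximating measure is a weighted combination of fixed node measures and the model output is therefore linear in the weights, the least-squares gradient is a fixed matrix-vector expression that needs no re-simulation per evaluation. All symbols and data below are illustrative, not taken from the paper.

```python
import numpy as np

# If the model output at the data points is linear in the weights w, say f(w) = A @ w
# with A precomputed column-by-column (one forward solve per node), then the cost
# J(w) = ||A w - d||^2 has gradient grad J = 2 A^T (A w - d). Names A, d, w are
# illustrative stand-ins.
rng = np.random.default_rng(0)
K, n = 5, 40                      # number of nodes, number of observations
A = rng.normal(size=(n, K))       # stand-in for precomputed model responses per node
d = rng.normal(size=n)            # stand-in for the aggregate data

def cost_and_grad(w):
    r = A @ w - d
    return float(r @ r), 2.0 * A.T @ r

w = np.full(K, 1.0 / K)           # start from the uniform weighting
J, g = cost_and_grad(w)
print(J, np.linalg.norm(g))
```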

Scalable Contour Tree Computation by Data Parallel Peak Pruning

How well does PPP scale with varying data size? Figure 16 shows the performance of PPP (VTK-m) at varying numbers of threads for all scaled versions of the GTOPO dataset. Figure 17 (top) also shows the curves for G(1.0) in serial and using 64 threads. First, we observe that the slope of the curves for PPP (VTK-m) in Figure 17 (top) is similar to the gray line showing the slope for perfect linear scaling. For additional detail, we can further see from Figure 16 that the increase in compute time is on the order of 3.5 to 3.86× from G(0.5) to G(1.0), and ≈ 4.5× from G(0.25) to G(0.5), across all numbers of threads. As the mesh size increases by a factor of 4 between each scaled version, these results indicate that PPP scales well with increasing mesh size. Figure 18 (top) furthermore confirms that the relative times per compute phase of the algorithm are similar for the different scaled versions of GTOPO. This is expected given the similar topology of the datasets and gives further evidence that the algorithm and its different compute phases overall behave well with growing data size. Finally, Figure 18 (bottom) shows the speed-ups using 64 threads on Haswell compared to serial for all scaled GTOPO datasets. We observe that speed-ups improve across all phases of the algorithm as the data size increases, in particular from G(0.03125) to G(0.25), with speed-ups leveling off afterwards. This indicates that for the smaller scaled datasets (G(0.125) and smaller) there is likely simply not enough work to utilize all available compute and memory resources, while scaling is consistent for the larger datasets.

Prediction Of Misclassification Data Based On Cognitive Computation Approach (CCA)

In quantitative analysis, missing data create an unavoidable problem in real-world large datasets. These issues can bias the conclusions of the computational process, increase the rate of erroneous data, and make the imputation process harder to carry out. Prediction models are one of the most elegant methods for managing missing data. This article introduces powerful approaches for the prediction of misclassified data using Machine Learning (ML) techniques. It also explores Adaptive Computation and Pattern Knowledge Theory using an effective Cognitive Computation Approach (CCA). Several strategies describe the classification of predictive techniques using efficient supervised machine learning algorithms. The main goal is to provide general guidelines on the selection of suitable data imputation algorithms and on implementing a cognitive approach within machine learning techniques. The proposed approach generates more precise and accurate results than other predictive approaches. Experimental results on both real and synthetic datasets show that the proposed approach offers valuable and optimistic insight into the prediction of misclassified information.
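The excerpt does not describe the CCA algorithm itself; purely as an illustration of the prediction-based imputation family it refers to, the sketch below applies scikit-learn's KNNImputer to a toy matrix with missing entries. It is not the paper's method.

```python
import numpy as np
from sklearn.impute import KNNImputer

# Generic prediction-based imputation on a toy matrix with missing entries (NaN).
X = np.array([[1.0, 2.0, np.nan],
              [3.0, np.nan, 6.0],
              [5.0, 6.0, 9.0],
              [7.0, 8.0, 12.0]])

imputer = KNNImputer(n_neighbors=2)   # each missing value predicted from 2 nearest rows
print(imputer.fit_transform(X))
```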

Secure and practical computation on encrypted data

the cloud provider prevents its widespread adoption. Standard encryption provides confidentiality but renders data unsearchable. There has been a lot of work on Symmetric Searchable Encryption (SSE), which enables efficient search on encrypted data, but the existing schemes are incompatible with cloud storage APIs. All prior schemes required both cloud storage and compute services, which increases cost, attack surface, and response time. More importantly, due to the high latency and monetary cost of outbound data transfer, this limits customers to cloud providers offering both storage and compute services, such as Amazon, and excludes storage-only services, such as Dropbox. We developed the first SSE system that is compatible with cloud storage APIs [2]. We designed a novel secure storage primitive, called Blind Storage, and used it as a black box to construct an SSE scheme. Blind Storage is designed to protect information against the cloud service provider and adversaries that steal data. The system is very efficient, based on standard cryptographic primitives (SHA256 and AES), secure in the standard model, and simple to implement. Even though all the computation is done by the client, whereas other schemes split computation between client and server, our SSE scheme is one of the most efficient SSE schemes for single-keyword queries; it has less than 10% overhead over plaintext search. It supports additions and deletions of documents. The system can also be used to protect local storage infrastructure to avoid data breaches. We built the Blind Storage system and the SSE scheme on top of it in C++, and the code is open source.
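Blind Storage itself is not reproduced here. As a toy illustration of the basic SSE idea, deterministic keyword tags that the server can match but not interpret, the sketch below derives tags with HMAC-SHA256 and stores only tags and document identifiers; it deliberately omits everything that makes the real scheme secure and storage-API compatible (document encryption with AES, padding, and the Blind Storage layout).

```python
import hmac, hashlib

# Toy searchable-index sketch: the client derives a pseudorandom tag per keyword with
# HMAC-SHA256 and uploads only {tag: [doc ids]}. The server matches tags from search
# tokens without learning the keywords. This is only a bare illustration of the SSE
# idea, NOT the Blind Storage construction.
KEY = b"client-secret-key"                 # in practice, a securely generated key

def tag(keyword: str) -> str:
    return hmac.new(KEY, keyword.encode(), hashlib.sha256).hexdigest()

def build_index(docs):                     # docs: {doc_id: [keywords]}
    index = {}
    for doc_id, words in docs.items():
        for w in set(words):
            index.setdefault(tag(w), []).append(doc_id)
    return index                           # uploaded to the (untrusted) store

def search(index, keyword):
    return index.get(tag(keyword), [])     # client sends tag(keyword) as the token

index = build_index({1: ["cloud", "storage"], 2: ["cloud", "search"]})
print(search(index, "cloud"))              # -> [1, 2]
```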

Fast Computation on Processing Data Warehousing Queries on GPU Devices

After the column-store tables are organized, they are connected to the queries and converted into a program that can be compiled using Ysmart [16]. Ysmart is a standalone GPU execution engine for warehouse-style queries. Its front end consists of a query parser and optimizer, which translates an SQL query into an optimized query plan tree. This tree is then used by the query generator to populate a driver program, which controls the query execution flow and is combined with the GPU operator library to produce an executable query binary. The query binary reads table data from column-format files on disk storage and invokes the GPU operators to offload data to GPUs for more efficient processing. Finally, the results are assembled into rows and returned to the user.

Effective Computation of Biased Quantiles over Data Streams

to per-packet processing time and packet loss using the User-Defined Aggregation Function (UDAF) facility of the Gigascope DSMS, a highly optimized system for monitoring very high speed data streams [CJSS03]. Gigascope has a two-level query architecture: at the low level, data is taken from the Network Interface Card (NIC) and placed in a ring buffer; queries at the high level then run over the data from the ring buffer. Gigascope creates queries from an SQL-like language (called GSQL) by generating C and C++ code, which is compiled and linked into executable queries. To integrate a UDAF into Gigascope, the UDAF functions are added to the Gigascope library and query generation is augmented to properly handle references to UDAFs; for more details, see [CKM+04].
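Gigascope's actual C/C++ UDAF hooks are not shown in the excerpt; the sketch below is a generic initialize/iterate/merge/output skeleton, with a naive exact-quantile aggregate standing in for the paper's space-efficient biased-quantile summary, purely to illustrate the lifecycle a DSMS expects from a UDAF.

```python
# Generic UDAF lifecycle sketch (initialize / iterate / merge / output). The hook names
# are generic, not Gigascope's, and the aggregate below keeps all values and computes
# exact quantiles as a stand-in for the biased-quantile summary.

class QuantileUDAF:
    def __init__(self):                 # initialize: empty aggregate state
        self.values = []

    def iterate(self, x):               # called once per tuple in the stream
        self.values.append(x)

    def merge(self, other):             # combine partial aggregates (e.g. per buffer)
        self.values.extend(other.values)

    def output(self, phi):              # final answer: the phi-quantile
        s = sorted(self.values)
        return s[min(len(s) - 1, int(phi * len(s)))]

agg = QuantileUDAF()
for v in [12, 3, 7, 25, 9, 14]:
    agg.iterate(v)
print(agg.output(0.99))                 # tail quantile, exact in this toy version
```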

DATA INTEGRITY PROOF AND SECURE COMPUTATION IN CLOUD COMPUTING

There are numerous security and privacy issues for cloud computing (Pearson, 2009), as it encompasses many technologies, including networks, databases, operating systems, virtualization, resource scheduling, transaction management, load balancing, and memory management. These issues fall into two broad categories: security issues faced by cloud providers and security issues faced by their customers. In most cases, organizations providing software, platform, or infrastructure as a service via the cloud must ensure that their infrastructure is secure and that their clients' data and applications are protected, while the customer must ensure that the provider has taken the proper security measures to protect their information. Cloud computing moves application software and databases to large data centers, where the management of data and services may not be trustworthy. This unique attribute poses many new security challenges. The world of cloud computing offers many benefits, such as limitless flexibility, better reliability, enhanced collaboration, portability, and simpler devices; to enjoy these benefits fully, we need to address the privacy and security concerns. In this study, cloud security is divided into two classes.

Efficient Computation of the Well Founded Semantics over Big Data

In future work, we plan to study more complex knowledge representation methods, including Answer Set Programming (Gelfond (2008)) and RDF/S ontology evolution (Konstantinidis et al. (2008)) and repair (Roussakis et al. (2011)). We believe that these complex forms of reasoning do not fall under the category of "embarrassingly parallel" problems for which MapReduce is designed, and thus a more complex computational model is required. Parallelization techniques such as OpenMP and the Message Passing Interface (MPI) may provide a higher degree of flexibility than the MapReduce framework, giving the opportunity to overcome the arising limitations. In fact, in Answer Set Programming, the system claspar (Gebser et al. (2011)) uses MPI, but it needs a preliminary grounding step, as it accepts only ground or propositional programs. Perri et al. (2013) use POSIX threads on shared memory for parallelized grounding. Combining these two approaches and making them more data-driven would be an interesting challenge.

Cloud Data Outsourcing and off sourcing using trust computation

The encryption time is the time interval between the start of the encryption process and its end; it is also termed the encryption time complexity. Figure 10 and Table 5 show the encryption time of both techniques (i.e., the proposed system and the traditional RSA algorithm). In this diagram, the X axis shows the file sizes used in the experiments and the Y axis shows the amount of time consumed to process each input file. The performance of the proposed system is drawn as a blue line and that of the traditional algorithm as a red line. According to the given results, the proposed system consumes less time than the RSA algorithm, and the amount of time consumed depends on the amount of data to be processed. The respective performance of the system shows its effectiveness over the traditional RSA algorithm. In order to identify the computational time overhead more clearly, the mean time consumption of the system is computed and shown in Figure 11. In that diagram, the Y axis shows the encryption time and the X axis shows the techniques implemented. According to these observations, the performance of the traditional RSA algorithm is cost-effective compared to the proposed cryptographic technique.

Modeling of Imperfect Data in Medical Sciences by Markov Chain with Numerical Computation

In this paper we consider sequences of observations that are irregularly spaced at infrequent time intervals. We discuss one of the most important topics in stochastic processes, namely Markov chains. We reconstruct the collected imperfect data as a Markov chain and obtain an algorithm for finding the maximum likelihood estimate of the transition matrix. This approach is known as the EM algorithm, which has important advantages over other approaches and consists of two phases: phase E (expectation) and phase M (maximization of the target function). Phases E and M are repeated until the sequence of matrices converges; its limit is the optimal estimator. This algorithm, in contrast with other optimization algorithms that could be used for this purpose, is practicable for maximum likelihood estimation and, unlike purely analytical methods, is executable by computer. At the end we verify the theoretical outcomes with numerical computation using the R software.
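A minimal sketch of the EM idea for a discrete-state chain observed with gaps: the E step runs forward-backward treating missing entries as uninformative observations, and the M step re-normalises the expected transition counts. Unit time steps and a known initial distribution are simplifying assumptions; the paper's handling of irregular spacing is not reproduced.

```python
import numpy as np

# EM sketch for estimating a Markov transition matrix from sequences with missing
# entries (None). Assumes unit time steps and a known initial distribution pi.
def em_transition(seqs, n_states, pi, n_iter=50):
    A = np.full((n_states, n_states), 1.0 / n_states)
    for _ in range(n_iter):
        counts = np.zeros_like(A)
        for seq in seqs:
            T = len(seq)
            # e[t, i] = P(observation at t | state i): indicator, or all-ones if missing
            e = np.ones((T, n_states))
            for t, o in enumerate(seq):
                if o is not None:
                    e[t] = 0.0
                    e[t, o] = 1.0
            alpha = np.zeros((T, n_states))
            beta = np.ones((T, n_states))
            alpha[0] = pi * e[0]
            alpha[0] /= alpha[0].sum()
            for t in range(1, T):                      # scaled forward pass
                alpha[t] = (alpha[t - 1] @ A) * e[t]
                alpha[t] /= alpha[t].sum()
            for t in range(T - 2, -1, -1):             # scaled backward pass
                beta[t] = A @ (e[t + 1] * beta[t + 1])
                beta[t] /= beta[t].sum()
            for t in range(T - 1):                     # expected transition counts
                xi = alpha[t][:, None] * A * (e[t + 1] * beta[t + 1])[None, :]
                counts += xi / xi.sum()
        A = counts / counts.sum(axis=1, keepdims=True)  # M step: row-normalise
    return A

# Two short sequences over 2 states, with gaps marked as None.
seqs = [[0, 0, None, 1, 1, None, 0], [1, None, 1, 0, 0, 0]]
print(em_transition(seqs, n_states=2, pi=np.array([0.5, 0.5])))
```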

No Data, No Computation, No Replication or Re-use: the Utility of Data Management and Preservation Practices for Computational Journalism

are also data sets with possible use cases for journalists. Data sets might be a "handful of individual items towards a news story" or "gigabytes of data" [4]. They can be in databases, flat files, or even handwritten documents. Additionally, the realm of "big data" includes government data, but also scientific, commercial, and private sources, e.g., sensor network data, social media, market research, the internet of things, email, company records, and large corpora of texts from books, journals, journal databases, and even news sources. With web scraping tools and the Internet Archive's Wayback Machine, we have access to a diverse universe of "volume, velocity, and variety as vast as the Internet itself" [3]. Data documentation (metadata), scripts, APIs, computational methods, algorithms, and even STATA do-files are also considered part of the data, and should likewise be made available to researchers through systems of preservation and archiving.

EnerJ: Approximate Data Types for Safe and General Low-Power Computation

SRAM supply voltage. Registers and data caches consist of static RAM (SRAM) cells. Reducing the supply voltage to SRAM cells lowers the leakage current of the cells but decreases the data integrity [13]. As examined by Kumar [18], these errors are dominated by read upsets and write failures, which occur when a bit is read or written. A read upset occurs when the stored bit is flipped while it is read; a write failure occurs when the wrong bit is written. Reducing SRAM supply voltage by 80% results in read upset and write failure probabilities of 10^−7.4 and 10^−4.94 respectively. Soft failures, bit flips in stored data due to cosmic rays and other events, are comparatively rare and depend less on the supply voltage. Section 5.4 describes the model we use to combine these various potential energy savings into an overall CPU/memory system energy reduction. To put the potential energy savings in perspective, according to recent studies [12, 24], the CPU and memory together account for well over 50% of the overall system power in servers as well as notebooks. In a smartphone, CPU and memory account for about 20% and the radio typically close to 50% of the overall power [6].
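A toy Monte Carlo rendering of the failure model quoted above, where each write stores the wrong bit with probability 10^−4.94 and each read flips the stored bit with probability 10^−7.4; the word size, memory size, and single write+read access pattern are invented for illustration.

```python
import numpy as np

# Toy Monte Carlo of the low-voltage SRAM failure model quoted above.
P_WRITE_FAIL = 10 ** -4.94
P_READ_UPSET = 10 ** -7.4

rng = np.random.default_rng(1)
data = rng.integers(0, 2, size=(100_000, 32), dtype=np.int8)   # 100k 32-bit words

# Write: each bit independently stored as its complement on a write failure.
stored = np.where(rng.random(data.shape) < P_WRITE_FAIL, 1 - data, data)

# Read: each bit independently flipped by a read upset.
read_back = np.where(rng.random(data.shape) < P_READ_UPSET, 1 - stored, stored)

corrupted_words = int(np.any(read_back != data, axis=1).sum())
print("corrupted words out of 100000:", corrupted_words)       # expect a few dozen
```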

Computation and Management of Big Data Using Cloud Map Table Framework

Here we can tweak the data warehouse so that it processes the information and then splits it, so we can compute or process the data according to need. Splitting the data follows the same principle as Hadoop: process each split on a machine or infrastructure whose processing capacity is larger than the split size. The rationale is that when we have huge amounts of data to process and lack the processing capability for it as a whole, we should split the data and process the parts; otherwise the capacity is wasted. In terms of algorithms, given a function to compute on n inputs, the divide-and-conquer strategy suggests splitting the input into k distinct subsets, 1 < k <= n, yielding k subproblems. These subproblems must be solved, and then a method must be found to combine the sub-solutions into a solution of the whole. If the subproblems are still relatively large, the divide-and-conquer strategy can be reapplied. Often the subproblems resulting from a divide-and-conquer design are of the same type as the original problem; in those cases the reapplication of the divide-and-conquer principle is naturally expressed by a recursive algorithm. Smaller and smaller subproblems of the same kind are generated until, eventually, subproblems small enough to be solved without further splitting are produced. As already mentioned, existing Big Data technologies do not provide any support for a multidimensional view of data, which means there is no support for creating or discovering relations, correlations, and patterns with those technologies [1].
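A minimal sketch of this splitting strategy: a chunk is processed directly once it fits the assumed capacity, otherwise it is divided into k subsets and the partial results are combined; the capacity limit and the process/combine placeholders are assumptions.

```python
# Minimal divide-and-conquer sketch: split the input until a chunk fits the assumed
# processing capacity, solve chunks directly, then combine the partial results.
CAPACITY = 1000            # assumed maximum chunk size a single worker can handle

def process(chunk):
    return sum(chunk)      # stand-in for the real per-chunk computation

def combine(results):
    return sum(results)    # stand-in for merging the k sub-solutions

def divide_and_conquer(data, k=2):
    if len(data) <= CAPACITY:
        return process(data)                         # small enough: solve directly
    step = (len(data) + k - 1) // k                  # split into k subsets
    parts = [data[i:i + step] for i in range(0, len(data), step)]
    return combine(divide_and_conquer(p, k) for p in parts)

print(divide_and_conquer(list(range(1_000_000))))    # -> 499999500000
```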
