Large data volume processing

Simulation to maximize retransmission of packets in WMSN

Because sensor nodes generate large volumes of multimedia data, both the processing and the transmission of that data lead to higher energy consumption than in other types of wireless sensor networks (WSN). This calls for the design of energy-aware multimedia processing algorithms and energy-efficient communication [1, 2] in order to maximize network lifetime while meeting QoS constraints. Several protocols have been proposed for image transmission over WSN [1]–[12].

High speed, high volume, optimal image subtraction for large volume astronomical data pipelines

Computing power continues to grow in approximate agreement with Moore's Law (~2x every 1.5-2 years), and astronomy data acquisition rates are growing as fast or faster. Driven by advances in camera design, image sizes have increased to over a gigabyte per exposure in some cases, and the exposure times per image have decreased from hours to seconds, or even down to video frame rates of multiple exposures per second. All of this is resulting in terabytes of science data per night in need of reduction and analysis. New telescopes and cameras already under construction will increase those rates by an order of magnitude. The application of emerging low-cost parallel computing methods to OIS and other image processing techniques provides a present and practical solution to this data crisis. Utilizing many-core graphical processing unit (GPU) technology in a hybrid conjunction with multi-core CPU and computer
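As a rough illustration of the core operation behind optimal image subtraction (OIS), the sketch below subtracts a kernel-convolved reference frame from a science frame using NumPy and SciPy on the CPU. The arrays and the Gaussian kernel are made-up placeholders: a real OIS pipeline solves for the PSF-matching kernel per image region, and a GPU version would swap in a GPU array library such as CuPy.

```python
import numpy as np
from scipy.signal import fftconvolve

def difference_image(science, reference, kernel):
    """Core of difference imaging: convolve the reference with a
    PSF-matching kernel, then subtract it from the science frame.
    The kernel here is a placeholder; OIS actually fits it."""
    matched = fftconvolve(reference, kernel, mode="same")
    return science - matched

# Illustrative data: two noisy frames and a small normalized Gaussian kernel.
rng = np.random.default_rng(0)
science = rng.normal(100.0, 5.0, size=(1024, 1024))
reference = rng.normal(100.0, 5.0, size=(1024, 1024))
y, x = np.mgrid[-3:4, -3:4]
kernel = np.exp(-(x**2 + y**2) / 2.0)
kernel /= kernel.sum()

diff = difference_image(science, reference, kernel)
print(diff.shape, diff.std())
```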

Performing Customer Behavior Analysis using Big Data Analytics

Big data is a collection of largely unstructured data of very large volume that comes from a variety of sources (the web, business organizations, etc.) in different formats and arrives with great velocity, which makes processing it complex and tedious with traditional database management tools; it can be described as a growing torrent. The major challenges in big data processing therefore include storage, search, distribution, transfer, analysis and visualization. Traditionally, the term 'analytics' referred to the study of existing data to identify potential trends and to analyze the effects of particular decisions or events, supporting business intelligence and yielding valuable insights. Today's biggest challenge is discovering the hidden information in the huge amounts of data collected from this varied set of sources, and this is where Big Data Analytics comes into the picture. One such application is customer behavior analysis, also referred to as customer analytics.

An Efficient Methodology To Sort Large Volume Of Data

Abstract—Sorting is a basic data processing technique used in everyday applications. To cope with technological advancement and the extensive increase in data acquisition and storage, sorting requires improvement to minimize processing time, response time and the space required for processing. Various sorting techniques have been proposed by researchers, but the applicability of those techniques to large volumes of data is not assured. The main focus of this work is to propose a new sorting technique, titled Neutral Sort, to reduce the time taken for sorting and decrease the response time for large volumes of data. Neutral Sort is designed as an enhancement to Merge Sort. The advantages and disadvantages of existing techniques in terms of their performance, efficiency and throughput are discussed, and the comparative study shows that Neutral Sort drastically reduces the time taken for sorting and hence reduces the response time.
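The excerpt does not describe Neutral Sort's actual modifications, so the sketch below only shows the Merge Sort baseline it is said to enhance, as a point of reference; it is plain Python and not the authors' implementation.

```python
def merge_sort(items):
    """Classic top-down merge sort: O(n log n) time, O(n) extra space.
    Neutral Sort is described as an enhancement of this baseline;
    its specific changes are not given in the excerpt above."""
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])

    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 3, 8, 1, 9, 2]))  # [1, 2, 3, 5, 8, 9]
```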

Survey on Hadoop and Introduction to YARN

Abstract— Big Data, the analysis of large quantities of data to gain new insight, has become a ubiquitous phrase in recent years. Day by day, data grows at a staggering rate. One of the efficient technologies that deals with Big Data is Hadoop, which is discussed in this paper. For processing jobs over large data volumes, Hadoop uses the MapReduce programming model. Hadoop makes use of different schedulers for executing jobs in parallel; the default is the FIFO (First In First Out) scheduler, and other schedulers with priority, pre-emption and non-pre-emption options have also been developed. Over time, MapReduce has run into some limitations, and to overcome them the next generation of MapReduce, called YARN (Yet Another Resource Negotiator), has been developed. This paper provides a survey of Hadoop, a few of the scheduling methods it uses, and a brief introduction to YARN.
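As a minimal illustration of the MapReduce programming model that Hadoop uses, here is a word-count mapper and reducer written in the Hadoop Streaming style in Python; the script names are illustrative, and the streaming jar path used at submission time varies by installation.

```python
# mapper.py -- emits "word\t1" for every word read from stdin.
import sys

for line in sys.stdin:
    for word in line.split():
        print(f"{word}\t1")
```

```python
# reducer.py -- sums the counts for each word; Hadoop sorts mapper
# output by key, so all lines for a given word arrive consecutively.
import sys

current_word, current_count = None, 0
for line in sys.stdin:
    word, count = line.rstrip("\n").split("\t")
    if word == current_word:
        current_count += int(count)
    else:
        if current_word is not None:
            print(f"{current_word}\t{current_count}")
        current_word, current_count = word, int(count)

if current_word is not None:
    print(f"{current_word}\t{current_count}")
```

Such a pair is typically submitted to the cluster with the Hadoop Streaming jar, passing the mapper, reducer, input and output as options, with YARN handling resource negotiation in Hadoop 2 and later.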

Evolution Mining: A Novel Accelerated Framework for Large Data processing in the big Data Paradigm

Recently, the rate of data updates and data evolution in cloud-based servers has exploded, which is itself a big data problem. Big Data manifests in three issues for data handling and analysis: velocity, variety and volume. Handling and analyzing such data leads to integration, computation, data placement and, finally, memory-related problems. The MapReduce paradigm is employed to handle large volumes of data at high velocity; MapReduce is used in production because of its simplicity, generality, and maturity.

A Large Volume of Data Handling Activity: Review of Knowledge Discovery Process

KNOWLEDGE REPRESENTATION: Knowledge representation is the final phase, in which the discovered knowledge is visually presented to the user. This essential step uses visualization techniques to help users understand and interpret the data mining results. It is common to combine some of these steps. For instance, data cleaning and data integration can be performed together as a preprocessing phase to generate a data warehouse. Data selection and data transformation can also be combined, where the consolidation of the data is the result of the selection or, as in the case of data warehouses, the selection is done on transformed data. KDD is an iterative process: once the discovered knowledge is presented to the user, the evaluation measures can be enhanced, the mining can be further refined, new data can be selected or further transformed, or new data sources can be integrated, in order to get different, more appropriate results. Data mining derives its name from the similarity between searching for valuable information in a large database and mining. Both imply either sifting through a
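To make the chain of KDD steps concrete, here is a small, hypothetical sketch in Python with pandas and scikit-learn: cleaning and integration are done together as preprocessing, selection and transformation produce per-customer features, a clustering step stands in for the mining phase, and the printed table is the knowledge representation. All table and column names are invented for the example.

```python
import pandas as pd
from sklearn.cluster import KMeans

# Hypothetical inputs: two raw tables to be cleaned and integrated.
orders = pd.DataFrame({"customer_id": [1, 1, 2, 3],
                       "amount": [20.0, None, 35.5, 12.0]})
customers = pd.DataFrame({"customer_id": [1, 2, 3],
                          "region": ["north", "south", "north"]})

# 1. Cleaning + integration (performed together, as the text notes).
orders["amount"] = orders["amount"].fillna(orders["amount"].median())
warehouse = orders.merge(customers, on="customer_id")

# 2. Selection + transformation: aggregate spending per customer.
features = warehouse.groupby("customer_id")["amount"].agg(["sum", "count"])

# 3. Data mining: cluster customers by spending behaviour.
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(features)

# 4. Evaluation / knowledge representation: inspect cluster assignments.
features["cluster"] = model.labels_
print(features)
```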

Design and Analysis of Large Data Processing Techniques

Sensor Data Analytics - Data collected from different sources can include multimedia files such as video, photo and audio, from which important business conclusions can be drawn. For example, on the roads, data from cars' black boxes is collected when vehicles are involved in accidents. There are huge text files, including endless logs from IT systems, notes and e-mails, which contain indicators that businesses are keen on. It is also important to understand that the vast number of sensors built into smartphones, vehicles, buildings, robot systems, appliances, smart grids and other devices collect data in a diversity that was unimaginable in the past. These sensors form the basis for the ever-evolving and frequently quoted Internet of Things. All of this data can be analyzed with MapReduce; to address this, MapReduce has been used for large-scale information extraction [16, 17]. On the other hand, due to the rapid increase in data volume at customer sites in recent years, some data such as web logs, call details, sensor data and RFID data cannot be managed by Teradata, partly because of the expense of loading large volumes of data into an RDBMS [18]. In this section, the parameters elaborated for MapReduce also apply to Bulk Synchronous Processing, as this technique is implemented via MapReduce.

Real-Time Big Data Analytics using Hadoop

Abstract-- Recent technology has led to large volumes of data from different areas (e.g., medical, aircraft, internet and banking transactions) over the last few years. Big Data is the collection of information from these fields. Big Data is characterized by high volume, high velocity and high variety: for example, data measured in GB or PB that comes in different forms, i.e. structured, unstructured and semi-structured, and that requires fast or real-time processing. Such real-time processing is not an easy task, because Big Data is a large dataset of varied kinds of data; a Hadoop system alone can handle the volume and variety of data, but for real-time analysis we must also handle its velocity. To achieve high data velocity we use two popular technologies: Apache Kafka, a message broker system, and Apache Storm, a stream processing engine.
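The excerpt names Apache Kafka as the message broker that feeds the stream processor; a minimal sketch of producing and consuming messages with the kafka-python client is shown below. The broker address and topic name are assumptions, and the Storm topology that would normally consume the stream (usually written in Java) is replaced here by a plain consumer loop.

```python
from kafka import KafkaProducer, KafkaConsumer

BROKER = "localhost:9092"   # assumed broker address
TOPIC = "sensor-events"     # assumed topic name

# Producer side: push raw events onto the topic.
producer = KafkaProducer(bootstrap_servers=BROKER)
producer.send(TOPIC, b'{"device": 42, "reading": 17.3}')
producer.flush()

# Consumer side: a stream processor (a Storm spout in the paper's
# setup, or this plain consumer) reads events as they arrive.
consumer = KafkaConsumer(TOPIC,
                         bootstrap_servers=BROKER,
                         auto_offset_reset="earliest",
                         consumer_timeout_ms=5000)
for message in consumer:
    print(message.value)
```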

Building Efficient Large-Scale Big Data Processing Platforms

In the era of big data, we envision that large-scale big data computing systems become ubiquitous to serve a variety of applications and customers. It is motivated by two factors. First, big data processing has shown the potential of benefiting many applications and services ranging from financial service, health applications, customer analysis, to social network applications. With more and more daily generated data, powerful processing ability will become the focus for research and development. Second, with the rise of cloud computing, it is now inexpensive and convenient for regular customers to rent a large cluster for data processing. Therefore, how to improve the performance in terms of execution times is the top issue on the list, especially when we imagine that a cluster computing system will often serve a large volume of jobs in a batch. In this dissertation, we mainly investigate the common characteristics of large-scale big data computing systems and aim to improve their efficiency and performance through more effective resource management and scheduling.

Using Very Large Volume Data Sets for Collaborative Systems Study

Document processing involves extracting the information contained in the data fields and converting it to a series-bit format that allows it to be stored in a database. Forms processing is considered complete when all information in the documents has been produced, verified and saved in a database. Data collection from forms is done either by an appreciable number of people who collect the forms and enter them into a program using the computer keyboard, or by entering the data automatically into the system.

A Large-Scale Web Data Collection as a Natural Language Processing Infrastructure

After selecting Japanese web pages, we converted these pages into Web standard format data through the conversion procedure described in Section 2.2. As output of existing NLP tools, we added the results of the Japanese parser KNP (Kurohashi and Nagao, 1994). In the conversion process, the Japanese web pages are organized into 10,000 page sets (one page set consists of 10,000 web pages). The page sets were processed by 162 cluster machines in parallel; each cluster machine has 4 CPU cores and 4 GB of main memory. To submit these jobs to the cluster machines, we used the grid shell GXP2 (Kaneda et al., 2002). It took two weeks to finish the conversion.
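A local analogue of the workflow described above (fixed-size page sets processed in parallel) can be sketched with Python's multiprocessing module; the convert function below is a placeholder for the real standard-format conversion and KNP parsing, and the corpus is dummy data rather than actual web pages.

```python
from multiprocessing import Pool

PAGES_PER_SET = 10000  # as in the excerpt: one page set = 10,000 pages

def convert_page_set(page_set):
    """Placeholder for the real work: convert pages to the Web
    standard format and attach parser (KNP) output."""
    return [page.lower() for page in page_set]  # stand-in transformation

def split_into_sets(pages, size=PAGES_PER_SET):
    return [pages[i:i + size] for i in range(0, len(pages), size)]

if __name__ == "__main__":
    pages = [f"PAGE {i}" for i in range(25000)]   # dummy corpus
    page_sets = split_into_sets(pages)
    # The paper used 162 cluster machines via GXP2; here a local pool.
    with Pool(processes=4) as pool:
        converted = pool.map(convert_page_set, page_sets)
    print(len(page_sets), "page sets processed")
```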

A Peer-To-Peer based Large-Scale Data processing Using MapReduce

have to make a contribution or they share a regular interest. With growing industry trends and the widespread use of cloud computing, this approach has evolved toward cloud-enabled operation. The method develops a data sharing service in a shared network based on a peer-to-peer approach, combining cloud computing, databases and peer-to-peer technologies. In this paper, we present an extended BestPeer, a system which provides flexible data sharing services for corporate network applications in the cloud, built on BestPeer, a peer-to-peer (P2P) based data management platform. By combining cloud computing, database, and P2P technology, the improved BestPeer achieves query processing efficiency in a pay-as-you-go manner. We evaluate the improved BestPeer on the Amazon EC2 cloud platform.

ObspyDMT: a Python toolbox for retrieving and processing large seismological data sets

ArcLink is an older data request protocol that arose in Europe in order to virtually consolidate distributed seismological data holdings across various European countries. It is a distributed request protocol developed by the German WebDC initiative of GEOFON and BGR (Bundesanstalt für Geowissenschaften und Rohstoffe) as a continuation of the NetDC concept originally developed by the IRIS DMC. ArcLink communicates via TCP/IP rather than via the supervision-intensive email or FTP requests required by other access mechanisms at the time. It accesses waveform data in miniSEED or SEED format and associated meta-information as dataless SEED files. At the time we developed ObsPyLoad, a precursor of obspyDMT (Scheingraber et al., 2013), only a few data centers were implementing FDSN web services. Hence, ArcLink clients greatly expanded the reach of ObsPyLoad to include most European data centers. ObsPyLoad contacts the ORFEUS DMC via ArcLink, which in turn “forwards” ArcLink requests to other data centers across Europe. This ArcLink functionality is retained in obspyDMT, but if a data center implements both interfaces, then obspyDMT accesses it via web services (default), which
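For comparison with the ArcLink path described above, a minimal FDSN web-service request with ObsPy's FDSN client looks like the sketch below; this uses the general ObsPy client rather than obspyDMT's own command-line interface, and the network, station, channel and time window are illustrative choices.

```python
from obspy import UTCDateTime
from obspy.clients.fdsn import Client

# Illustrative request: one hour of vertical broadband data from an
# IRIS-hosted station; all request parameters are placeholder choices.
client = Client("IRIS")
start = UTCDateTime("2014-01-01T00:00:00")
stream = client.get_waveforms(network="IU", station="ANMO",
                              location="00", channel="BHZ",
                              starttime=start, endtime=start + 3600)
print(stream)
```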

Comparison of Methods for Processing Missing Values in Large Sample Survey Data

In particular, missing data mechanisms are generally classified into three main categories: missing completely at random (MCAR), missing at random (MAR) and missing not at random (MNAR) [13]. The missing data mechanism has implications for the choice of methods to handle missing data. Multiple imputation (MI) can provide unbiased estimates of the regression parameter of interest when the missing data are MAR or MCAR. A recent study found that when the missing data are assumed to be MAR or MCAR, complete case analysis (CCA) performed well (e.g., unbiased risk difference, 95% coverage) [5], although some papers indicated that the method can result in substantial bias [3, 9, 19]. The applicability of SI is the same as for CCA and MI; for example, inverse probability weighting (IPW) is typically implemented assuming MAR [20-21]. More information about the implications of these methods under MNAR is discussed in [22].
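To make the contrast between complete-case analysis and imputation concrete, here is a small sketch in Python; the toy table is invented and made MCAR by construction, and scikit-learn's IterativeImputer stands in for a single imputation draw (full MI would repeat the imputation several times and pool the estimates).

```python
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

# Toy data with values missing completely at random (MCAR).
df = pd.DataFrame({"age": [34, 51, np.nan, 29, 62, np.nan],
                   "income": [40.0, np.nan, 55.0, 38.0, np.nan, 47.0]})

# Complete-case analysis (CCA): drop every row with a missing value.
cca = df.dropna()
print("CCA mean income:", cca["income"].mean())

# Model-based imputation (one draw; MI would repeat and pool).
imputed = pd.DataFrame(IterativeImputer(random_state=0).fit_transform(df),
                       columns=df.columns)
print("Imputed mean income:", imputed["income"].mean())
```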

Hadoop and Its Role in Modern Image Processing

After that, this request is sent to each node. These nodes, which we call task trackers, perform data processing independently and in parallel by running the Map function [12]. After the task trackers' work is finished, the results are stored on the same nodes. Obviously, these intermediate results are local and incomplete because they depend only on the data available on each node. Once the intermediate results are ready, the job tracker sends the Reduce request to these nodes; this performs the final processing on the results, and the result of the user's request is saved on a final compute node. At this point, MapReduce is finished, and further processing of the results is performed by Big Data analysts. This processing can be performed directly on the results, or classical methods of data analysis can be applied by transferring the resulting data into a relational database or data warehouse [13].
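The flow described above (local, incomplete map results on each task tracker, followed by a reduce step that merges them) can be simulated in a few lines of plain Python; the node names, data blocks and word-count job are all illustrative.

```python
from collections import Counter, defaultdict

# Each entry stands for the data block held by one task-tracker node.
node_blocks = {
    "node1": ["cat dog", "dog dog"],
    "node2": ["cat bird"],
    "node3": ["bird bird cat"],
}

# Map phase: every node counts words only in its own block, so the
# per-node results are local and incomplete, as the text describes.
intermediate = {node: Counter(" ".join(lines).split())
                for node, lines in node_blocks.items()}

# Reduce phase: merge the per-node counts into the final answer.
final = defaultdict(int)
for counts in intermediate.values():
    for word, n in counts.items():
        final[word] += n

print(dict(final))  # {'cat': 3, 'dog': 3, 'bird': 3}
```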

ABSTRACT

As many of the above references discuss, the current statistics curriculum often lacks data analysis (Tukey, 1962; Nolan and Temple Lang, 2010). Real data analysis makes the discipline more concrete to students, and a focus on solving a real problem can engage students who might otherwise find the subject boring. Teaching data analysis is challenging, but it is challenging in the way that teaching the practice of engineering or the use of the scientific method is challenging. By not giving students practice doing data analysis on a real problem, the statistics curriculum may encourage them to view statistical methodology as a hammer to be procedurally applied to data. It is well established in engineering and the physical sciences that students should get practical experience doing the thing during their education: why does the same principle not apply more often to statistics at the undergraduate and graduate level? When teaching statistical modeling, it might be more effective to first introduce the model (e.g. linear/logistic regression) in a predictive context instead of the traditional inferential context.
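As a small illustration of the predictive framing suggested above, the sketch below fits a logistic regression on a training split of a bundled scikit-learn dataset and reports held-out accuracy rather than interpreting coefficients; the dataset choice is arbitrary and only serves the example.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Predictive framing: the question is "how well does the model predict
# unseen cases?", not "which coefficients are significant?".
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=5000).fit(X_train, y_train)
print("held-out accuracy:", round(model.score(X_test, y_test), 3))
```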

Large Scale Data Munging using SparkR

ABSTRACT: It is a known fact that 90% of all the data in the world has been generated over the last two years. Internet companies are swamped with data that can be aggregated and analyzed. Big Data and distributed computing are the next essential things companies have to adopt in order to explore their data, maximize profits and increase business efficiency through data analysis. This is where analytical tools come into the picture. R is one such tool with impressive capabilities; however, it is widely publicized that R's biggest limitation is its data processing approach, which is to load everything into memory and process it there. This not only limits the amount of data that can be processed but also scales very badly for complex processes. In this paper we present how to perform large-scale data munging and subsequent analysis with R.
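The paper itself works with SparkR; since the examples in this collection are sketched in Python, the equivalent distributed munging pattern is shown below with PySpark, which avoids loading the whole dataset into a single in-memory session. The file path and column names are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("munging-sketch").getOrCreate()

# Hypothetical input: a CSV far larger than local memory; Spark reads
# and aggregates it in a distributed fashion instead of pulling it
# all into one R/Python session.
df = spark.read.csv("hdfs:///data/transactions.csv",
                    header=True, inferSchema=True)

summary = (df.filter(F.col("amount") > 0)
             .groupBy("customer_id")
             .agg(F.sum("amount").alias("total_spent"),
                  F.count("*").alias("n_orders")))

summary.show(10)
spark.stop()
```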

Chapter 6 A Survey of Load Balancing Techniques for Data Intensive Computing

Another policy may focus on dividing the region so that the population is equal in each subdivided region. This policy, while better than equal-sized spatial splits, still results in an imbalance: events are not equally distributed among all the entities, and during the course of the simulation there is a lot of flux in the number of active individuals. Other commonly used schemes include random distributions and explicit spatial scattering [18], which has been explored in the context of a traffic simulation problem. The main idea in these schemes is to divide each complex computational region into smaller pieces. This works well in many situations, but it also increases the communication footprint within the system. The communication overheads may become a bottleneck where a large number of messages are being exchanged, and also where the network connecting the processing elements is bogged down, resulting in higher latencies. In such situations dynamic load balancing is needed to reduce this imbalance.
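The two policies being contrasted (equal-sized spatial splits versus splits balanced by population) can be illustrated with a one-dimensional toy region; the entity positions below are synthetic and deliberately clustered so the imbalance of the equal-sized policy is visible.

```python
import numpy as np

rng = np.random.default_rng(1)
# Entities clustered unevenly along a 1-D region [0, 100).
positions = np.concatenate([rng.uniform(0, 20, 800),     # dense cluster
                            rng.uniform(20, 100, 200)])  # sparse remainder

def equal_sized_splits(pos, parts, length=100.0):
    """Cut the region into equal-length pieces regardless of density."""
    edges = np.linspace(0, length, parts + 1)
    return [int(np.sum((pos >= lo) & (pos < hi)))
            for lo, hi in zip(edges, edges[1:])]

def equal_population_splits(pos, parts):
    """Cut at quantiles so each piece holds roughly the same population."""
    edges = np.quantile(pos, np.linspace(0, 1, parts + 1))
    edges[-1] += 1e-9  # include the right-most entity
    return [int(np.sum((pos >= lo) & (pos < hi)))
            for lo, hi in zip(edges, edges[1:])]

print("equal-sized pieces:     ", equal_sized_splits(positions, 4))
print("equal-population pieces:", equal_population_splits(positions, 4))
```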

On Distributed Deep Network for Processing Large-Scale Sets of Complex Data

years that makes the training of modestly sized deep networks practical. A known limitation of the GPU approach is that the training speed-up is small when the model does not fit in GPU memory (typically less than 6 gigabytes). To use a GPU effectively, researchers often reduce the size of the data or parameters so that CPU-to-GPU transfers are not a significant bottleneck [11].
