Top PDF results for "heterogeneous data source change":

A Coherent Distributed Grid Service for Assimilation and Unification of Heterogeneous Data Source

This tool is used to install the grid service on the grid member’s machine. It is a very user-friendly installer, and most of the tasks are automated. The centre’s schema definition file, called ‘centre.xml’, is provided with the installer. The grid members may use different types of DBMS, so when the installer is started the person installing has to choose the DBMS where the data are stored. After choosing the appropriate DBMS, the installer helps the person select the database, tables and columns. It also displays the centre’s tables and columns with proper descriptions; this information is taken from the centre.xml file. The user has to map the local tables and columns to the centre’s tables and columns. After the mapping is completed, an XML file called ‘GridMapping.xml’ is created in which the mapped information is saved. The installer then displays a registration page, where the grid member/organization name, network address and port number have to be provided. The IP address of the current computer is read automatically by the installer, and a default port number is shown, but the person installing the grid service is free to change it. After registration a confirmation is provided, and the grid service for that database is started automatically by the installer. Figure 3 displays the overall steps of the grid service installer.
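
A minimal sketch of what the installer's mapping step might produce: the paper does not specify the structure of GridMapping.xml, so the element and attribute names below (GridMapping, Map, Local, Centre, member, dbms) are assumptions for illustration only.

```python
# Hypothetical sketch of writing a GridMapping.xml-style file that records how
# local tables/columns map to the centre's schema. Names are illustrative.
import xml.etree.ElementTree as ET

# (local_table, local_column) -> (centre_table, centre_column), as chosen by the user
mappings = [
    (("patients", "pid"), ("Patient", "PatientID")),
    (("patients", "fullname"), ("Patient", "Name")),
]

root = ET.Element("GridMapping", member="Hospital-A", dbms="MySQL")
for (ltab, lcol), (ctab, ccol) in mappings:
    m = ET.SubElement(root, "Map")
    ET.SubElement(m, "Local", table=ltab, column=lcol)
    ET.SubElement(m, "Centre", table=ctab, column=ccol)

ET.ElementTree(root).write("GridMapping.xml", encoding="utf-8", xml_declaration=True)
```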

A new GIS toolbox for integrating massive heterogeneous GIS data for land use change analysis

Thünen-Institute of Rural Studies, Braunschweig, Germany. birgit.laggner@ti.bund.de

Abstract: Agricultural land use in Germany and related impacts on the environment and the use of natural resources are key research topics at the Thünen-Institute of Rural Studies. As spatial context is essential for the analysis of causal connections, GIS data regarding all necessary information was gathered during different research projects and prepared for processing in a database. In particular, the Integrated Administration and Control System, which was available for certain project purposes for several Federal Laender and years, serves as a very detailed data source for agricultural land use. We use different Open Source GIS software like PostgreSQL/PostGIS, GRASS and QuantumGIS for geoprocessing, supplemented with the proprietary ESRI product ArcGIS. After introducing the used input data and the general processing approach, this paper presents a selection of geoprocessing routines for which Open Source GIS software was used. As an exemplary ’use case’ for the conclusions from the consecutive statistical analysis, we summarize impacts of increased biogas production on agricultural land use change, highlighting the trend in biogas maize cultivation and the conversion of permanent grassland to agricultural cropland.
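
The excerpt does not show the paper's geoprocessing routines, so the following is only an illustrative sketch of the kind of PostGIS overlay such an analysis could use: intersecting land-use polygons from two years to quantify grassland-to-cropland conversion. Table names, column names and years are invented, and a metric projected CRS is assumed.

```python
# Illustrative PostGIS overlay (not taken from the paper): area of permanent
# grassland in one year that is cropland in a later year, summed in hectares.
import psycopg2

SQL = """
SELECT SUM(ST_Area(ST_Intersection(a.geom, b.geom))) / 10000.0 AS converted_ha
FROM   landuse_2005 AS a
JOIN   landuse_2010 AS b
       ON ST_Intersects(a.geom, b.geom)
WHERE  a.use_class = 'permanent_grassland'
  AND  b.use_class = 'cropland';
"""

with psycopg2.connect("dbname=landuse") as conn, conn.cursor() as cur:
    cur.execute(SQL)
    print("Grassland converted to cropland [ha]:", cur.fetchone()[0])
```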

A Flexible Data Structure for Heterogeneous Information Integration

Generally, the integration of heterogeneous data can be approached from two aspects: data structure and semantics. For example, the OEM model [1, 2] is a simple self-describing object model for structured, semi-structured and unstructured data, which allows object nesting. Its variant, the OIM model [3-5], is also for structured and semi-structured data. However, OEM and its variants mix data values with the data schema, leading to data repetition and waste of memory. The data models have to be modified whenever the data source changes, and their ability to express semantic relations among data is also very limited. Due to its content self-description, platform independence, separation of content and display, good extensibility, etc., XML is also suitable for describing heterogeneous data sources. A lot of XML-based data integration models are available, such as the XML tree data model in [6]; XML DTD is adopted as the structural description for data interchange in the SIMS system [7]; the XML-based data integration model XIDM [8-10] is applied in the Panorama system; the SUDAI [11-13] model uses
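
To make the OEM-style criticism concrete, here is a minimal sketch of a self-describing, nestable object in the spirit of OEM. The field names are illustrative and not taken from the cited papers; the point is only that label and type travel with every value, so schema information is repeated in each instance.

```python
# Minimal sketch of an OEM-style self-describing object: each object carries its
# own label and type alongside its value, and complex objects nest other objects.
from dataclasses import dataclass
from typing import List, Union

@dataclass
class OEMObject:
    label: str                                            # self-describing name
    type: str                                             # "string", "int", "complex", ...
    value: Union[str, int, List["OEMObject"], None] = None

person = OEMObject("person", "complex", [
    OEMObject("name", "string", "Alice"),
    OEMObject("phones", "complex", [
        OEMObject("phone", "string", "555-0100"),
        OEMObject("phone", "string", "555-0101"),
    ]),
])
# Because every instance repeats its label and type, the same schema information
# is stored with each object -- the data repetition the excerpt points out.
```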

From Source to Surface: The dynamics of heterogeneous mantle plumes

The appearance of Loa and Kea, as geographically distinct volcanic tracks, approximately coincides with a bend in the volcanic island chain (Fig. 4.1), indicating a change in the relative motion between the Hawaiian plume and the Pacific plate. Although an azimuthal change in Pacific plate-motion is evident from high-resolution multi-beam bathymetric data along the Pitman Fracture Zone (Austermann et al., 2011), it remains difficult to isolate in the record of stage Euler vectors from the recent and higher-resolution finite rotations of Wessel and Kroenke (2008). This is likely due to the increased impact of noise on the inferred kinematics (Iaffaldano et al., 2012), which manifests as erratic changes in the Pacific Euler pole’s location since the Early Neogene (Fig. 4.3a). In Fig. 4.3b we illustrate the path of the Pacific Euler pole after mitigating the impact of data noise (Iaffaldano et al., 2012) (see Supplementary Material to Paper 3, section 7.2.1). Our results show that over the past ∼10 Myr the Pacific pole has wandered northwards, causing a clockwise rotation and a progressively larger northwards component in the direction of plate-motion at Hawaii. The most rapid polar migration occurred over the past 4.2 Myr (Fig. 4.3d), coinciding with the appearance of double volcanic tracks across the Pacific plate and corroborating the prediction made by Hieronymus and Bercovici (1999). As illustrated in Fig. 4.2b, when the plate changes direction its motion will initially be oriented at an oblique angle to the direction of plume tilting. As a consequence, shallow and deep melts rising vertically will interact with different regions of lithosphere, thus allowing them to erupt through geographically distinct volcanic edifices at the surface, which we predict constitute the Loa and Kea tracks, respectively (Fig. 4.2b and Fig. 4.2d). As the plume responds to the modified flow regime, it will eventually realign to the new direction of plate-motion (Griffiths and Richards, 1989), causing the lava types to overprint once again.

Source Characterization for Broadband Ground-Motion Simulation: Kinematic Heterogeneous Source Model and Strong Motion Generation Area

(Fig. 1). Synthetic waveforms for the best source models fit the observed data well in acceleration and velocity, while some displacement synthetics seem to be of smaller amplitude than the observations (Fig. 5). The estimated sizes of the strong motion generation areas, rupture starting points in the area, and the rise time are summarized in Table 3. Most of the source models were composed of a single strong motion generation area, except for the 13 May 1997 Kagoshima-ken Hokuseibu earthquake (Table 1, number 4m). We found that this event consisted of two strong motion generation areas, which propagated along both conjugate fault planes. Horikawa (2001) also obtained two asperities, one on each fault plane, by kinematic waveform inversion. Rupture propagation directions of the strong motion generation area in the frequency range of 0.2–10 Hz agreed with those of the asperities obtained from low-frequency (<1 Hz) waveform inversion studies, reported by Okada et al. (1997) (numbers 1m and 2m), Ide (1999) (number 5m), Nakahara et al. (2002) (number 9m), Miyakoshi et al. (2000) (numbers 3m, 5m, and 9m), and Horikawa (2001) (numbers 3m and 4m). For the 1999 Shiga-ken Hokubu earthquake (number 12m), estimates of the size of the strong motion generation area and rise time were stable; however, we could not determine the fault plane from the strong ground motion simulation. There was no clear difference in the residuals and shapes of simulated waveforms for the two conjugate fault planes obtained by moment tensor inversion.

A Survey on Data Placement in Heterogeneous Cloud Environment for Big Data

Big data, being a developing field, has many research issues and challenges to address. The main research issues in big data are the following: 1) handling data volume, 2) analysis of big data, 3) privacy of data, 4) storage of huge amounts of data, 5) data visualization, 6) job scheduling in big data, 7) fault tolerance. 1) Handling data volume: the large amount of data coming from different fields of science such as biology, astronomy, meteorology, etc. makes its processing very difficult for scientists. 2) Analysis of big data: it is difficult to analyze big data due to the heterogeneity and incompleteness of the data; collected data can differ in format, variety, and structure. 3) Privacy of data in the context of big data: there is public fear regarding the inappropriate use of personal data, particularly through linking of data from multiple sources; managing privacy is both a technical and a sociological problem. 4) Storage of huge amounts of data: this is the problem of how to recognize and store important information, extracted from unstructured data, efficiently. 5) Data visualization: data processing techniques should be efficient enough to enable real-time visualization. 6) Job scheduling in big data: this issue focuses on efficient scheduling of jobs in a distributed environment. 7) Fault tolerance: this is another issue in the Hadoop framework for big data; in Hadoop, the NameNode is a single point of failure, and block replication is one of the fault tolerance techniques used by Hadoop. Fault tolerance techniques must be efficient enough to handle failures in a distributed environment. MapReduce provides an ideal framework for processing such large datasets by using parallel and distributed programming approaches.

Classification of heterogeneous data based on data type impact of similarity

method showed a good performance for heterogeneous data classification, the authors did not consider the effects of homogeneous subsets on all relevant subspaces during the training stage. Hsu et al. in [18] studied a mixed data classification problem by proposing a method called Extended Naïve Bayes (ENB) for mixed data with numerical and categorical features. The method uses the original Naive Bayes algorithm for computing the probabilities of categorical features; numerical features are statistically adapted to discrete symbols taking into consideration both the average and the variance of the numeric values. In [19] Li et al. proposed a new technique for mining large data with mixed numerical and nominal features. The technique is based on supervised clustering to learn data patterns and uses these patterns for classifying a given data set. For calculating the distance between clusters, the authors used two different methods: the first was based on using a specific distance measure for each type of feature and then combining them into one distance; the second was based on converting nominal features into numeric features, after which a numeric distance is used for all features. In [20] Sun et al. presented a soft computing technique called neuro-fuzzy based classification (NEF-CLASS) for heterogeneous medical data sets; the motivation at that time was based on the fact that most conventional classification techniques are able to handle homogeneous data sets but not heterogeneous ones. Their method has been tested on both pure numerical and mixed numerical and categorical datasets.
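
A rough sketch of the mixed-type Naive Bayes idea summarized above: categorical features use class-conditional frequencies, while numeric features are handled through their class-conditional mean and variance (here via a Gaussian likelihood). This is an illustration of the general idea, not the authors' ENB implementation, and the toy data are invented.

```python
# Mixed-type Naive Bayes sketch: categorical features by frequency counts with
# Laplace smoothing, numeric features by a Gaussian from mean and variance.
import math
from collections import defaultdict

def train(rows, labels, categorical):
    """rows: feature tuples; labels: class labels; categorical: set of feature indices."""
    model = {}
    for x, y in zip(rows, labels):
        cls = model.setdefault(y, {"n": 0, "cat": defaultdict(lambda: defaultdict(int)),
                                   "num": defaultdict(list)})
        cls["n"] += 1
        for i, v in enumerate(x):
            if i in categorical:
                cls["cat"][i][v] += 1
            else:
                cls["num"][i].append(v)
    return model

def predict(model, x, categorical):
    total = sum(c["n"] for c in model.values())
    best, best_lp = None, -math.inf
    for y, c in model.items():
        lp = math.log(c["n"] / total)                      # class prior
        for i, v in enumerate(x):
            if i in categorical:
                lp += math.log((c["cat"][i][v] + 1) / (c["n"] + len(c["cat"][i]) + 1))
            else:
                vals = c["num"][i]
                mu = sum(vals) / len(vals)
                var = sum((t - mu) ** 2 for t in vals) / len(vals) + 1e-9
                lp += -0.5 * math.log(2 * math.pi * var) - (v - mu) ** 2 / (2 * var)
        if lp > best_lp:
            best, best_lp = y, lp
    return best

X = [("red", 1.2), ("red", 0.9), ("blue", 3.1), ("blue", 2.8)]
y = ["A", "A", "B", "B"]
m = train(X, y, categorical={0})
print(predict(m, ("blue", 3.0), categorical={0}))          # -> "B"
```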

Data Source Data Size Bioinformatics Topics

It is always difficult to present a rapidly moving field such as bioinformatics. Keeping abreast of new developments in bioinformatics is as important an activity as using the data themselves. Current awareness of the field is essential to ensure that all of the relevant available data are captured, maximizing research efficiency. Finally, the best approach to becoming proficient in the use of software tools is often trial and error, and bioinformatics is no exception; trial and error in silico can obviate the far less desirable prospect of trial and error in the laboratory, so do not be afraid to experiment with bioinformatics applications: see what the human genome can yield in your hands. Incorporating the use of software in biological analysis is called "bioinformatics". Bioinformatics is a multidisciplinary field and requires people from different working areas. It is the combination of biology and computer science, a newly emerging field that helps in collecting, linking, and manipulating different types of biological information to discover new biological insight. Before the emergence of bioinformatics, scientists working in different biological fields, such as human science, ecological science and many others, felt the need for a tool that would help them work together. They knew they were all interlinked and had important information for each other, but they did not know how to integrate it. In such circumstances, bioinformatics emerged to help these scientists and researchers carry out research faster, leading to quicker discoveries by providing readily available information with the help of computer technology. Scientists and researchers spend their whole lives inventing things for human benefit. After so many years of development, they have collected a huge amount of valuable data from their experiments all over the world, and this collection continues and will always continue for the betterment of humankind. Sometimes they need to repeat old research because it is either hard to obtain the old data or they do not know whether it exists; this wastes their valuable time. Let us take the example of DNA identification. Every species, including human beings, has particular DNA strands that contain the genetic instructions used in the development and functioning of all known living organisms. By identifying DNA information one can trace generational links and find the root of different diseases. Earlier it was hard to manage this information. In order to collect and link DNA information from all over the world and to solve many medical complications, bioinformatics lends a very helpful hand.

Seismic moment and volume change of a spherical source

Müller (2001) noted that the discrepancy between Equations 1 and 2 could not be resolved, and he interpreted these two forms to correspond to the limits of the actual volume of the source. Wielandt (2003) discussed the discrepancy and noted that the source volume change depends on the source geometry. Richards and Kim (2005) argued that each relationship is based on a different definition of volume changes at the source and that Equation 1 is preferred for characterizing underground explosions.

Research on IT Architecture of Heterogeneous Big Data

Google is the advocate and promoter of big data processing, and its famous modules Bigtable [1], GFS and MapReduce gave rise to many commercial systems and open source tools. Bigtable is a distributed structured data storage system designed to handle massive amounts of data: usually PB-level data distributed across thousands of ordinary servers. GFS [2] is a scalable distributed file system. It is a dedicated file system designed to store massive amounts of search data. GFS is used for large-scale, distributed applications that access large amounts of data; it runs on inexpensive commodity hardware, yet provides fault tolerance and high overall performance to a large number of users. MapReduce [3] is an algorithmic model and associated implementation for processing and generating big data sets. Yahoo’s outstanding contribution to big data processing is that they created the open source framework Hadoop [4,5] based on the MapReduce model and promoted HBase, Hive, and other peripheral technologies, making Hadoop the base of most current big data processing products. In our country, Baidu, Tencent, Alibaba and other Internet companies providing network services have been faced with big data processing needs and have also been developing and using big data processing technologies. However, solutions and frameworks purely providing data processing ser-
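
For readers unfamiliar with the MapReduce model mentioned above, here is a minimal, single-process sketch of the programming model only: a map function emits key/value pairs, which are grouped by key and passed to a reduce function. Real systems (Google's MapReduce, Hadoop) distribute these phases across many machines; this toy word count merely illustrates the model.

```python
# Minimal in-process sketch of the MapReduce programming model (word count).
from collections import defaultdict

def map_fn(doc):
    for word in doc.split():
        yield word, 1

def reduce_fn(word, counts):
    return word, sum(counts)

def mapreduce(docs, mapper, reducer):
    groups = defaultdict(list)
    for doc in docs:                      # map phase: emit key/value pairs
        for k, v in mapper(doc):
            groups[k].append(v)
    # shuffle/group by key, then reduce phase
    return [reducer(k, vs) for k, vs in groups.items()]

print(mapreduce(["big data big", "data"], map_fn, reduce_fn))
# -> [('big', 2), ('data', 2)]
```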

Data Migration In Heterogeneous Databases (ETL)

Abstract—The development of economic systems shows a rapid growth trend and requires establishing a rapid trans-regional clearing system in banks. We study the approaches and processes of data migration, taking a bank’s database as an example. There are three methods in the data migration process: data migrated by tools beforehand, data migrated manually beforehand, and data generated by the new system afterwards. The process converts the data from the source database to the destination database and then migrates it successfully. The problem in database migration is the lack of a well-defined, low-risk and low-cost strategy for moving enterprise data from one database to another over time. A large amount of data is managed by databases and applications in companies today. Data migration is the process of moving data from one database to another: the data are extracted from different databases and stored in other databases.
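
To make the extract-transform-load pattern described above concrete, here is a minimal, self-contained sketch. SQLite stands in for the (unspecified) bank DBMSs, and the table, column names and transformations are invented for illustration; a real migration would add validation, logging and reconciliation.

```python
# Minimal ETL sketch: extract rows from a source database, transform them, and
# load them into a destination database with a different schema.
import sqlite3

# toy source database standing in for the legacy system
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE accounts (id INTEGER, holder TEXT, balance REAL)")
src.executemany("INSERT INTO accounts VALUES (?, ?, ?)",
                [(1, "  alice smith ", 120.50), (2, "BOB JONES", 9.99)])

# toy destination database standing in for the new system
dst = sqlite3.connect(":memory:")
dst.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, holder TEXT, balance_cents INTEGER)")

# extract -> transform -> load
for acc_id, holder, balance in src.execute("SELECT id, holder, balance FROM accounts"):
    dst.execute("INSERT INTO accounts VALUES (?, ?, ?)",
                (acc_id, holder.strip().title(), int(round(balance * 100))))
dst.commit()

print(list(dst.execute("SELECT * FROM accounts")))
```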

Data Migration in Heterogeneous Storage Systems

Several approximation algorithms have been developed for data migration in local area networks [5], [11], [7], [12], [13], [14]. In their ground-breaking work, Hall et al. [4] studied the problem of scheduling migrations given a set of disks, each storing a subset of items, and a specified set of migrations. A crucial constraint in their problem is that each disk can participate in only one migration at a time. If both disks and data items are identical, this is exactly the problem of edge-coloring a multigraph. That is, we can create a transfer graph G(V, E) that has a node corresponding to each disk and a directed edge corresponding to each specified migration. Algorithms for edge-coloring multigraphs can then be applied to produce a migration schedule, since each color class represents a matching in the graph that can be scheduled simultaneously. Computing a solution with the minimum number of colors is NP-hard [15], but several approximation algorithms are available for edge coloring [16], [17], [18], [19], [20]. With space constraints on disks, the problem becomes more challenging. Hall et al. [4] showed that, under the assumption that each disk has one spare unit of space, very good constant-factor approximations can be obtained. The algorithms use at most 4⌈∆/4⌉ colors with at most n/3 bypass nodes, or at most 6⌈∆/4⌉ colors without bypass nodes, where ∆ is the maximum degree of the transfer graph and n is the number of disks.
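
The scheduling idea can be illustrated with a simple greedy sketch: each migration is an edge between two disks, a disk may take part in only one migration per round, so every round is a matching in the transfer graph. This greedy round-building is only an illustration of the constraint; it is not one of the cited approximation algorithms and gives no guarantee on the number of rounds.

```python
# Greedy migration scheduling: repeatedly pick a set of migrations that touch
# pairwise-disjoint disks (a matching) and run them as one round.
def schedule_migrations(migrations):
    """migrations: list of (source_disk, target_disk) pairs."""
    remaining = list(migrations)
    rounds = []
    while remaining:
        busy, this_round, leftover = set(), [], []
        for src, dst in remaining:
            if src not in busy and dst not in busy:
                this_round.append((src, dst))
                busy.update((src, dst))
            else:
                leftover.append((src, dst))
        rounds.append(this_round)
        remaining = leftover
    return rounds

# disks A..D; four migrations scheduled in rounds of disjoint disk pairs
print(schedule_migrations([("A", "B"), ("B", "C"), ("C", "D"), ("A", "D")]))
# -> [[('A', 'B'), ('C', 'D')], [('B', 'C'), ('A', 'D')]]
```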

Bayesian Parameter Estimation for the Localization of a Radioactive Source in a Heterogeneous Urban Environment.

∼2 m for both cases using a 30 min count time. Analysis also suggested that shorter count times could be used, with count times in excess of 450 s producing little change in the posterior density. Discussion of the results of the field experiments led to several further points of investigation. First, it was discovered that during the second set of measurements, interference by vehicles moving through the scene caused unaccounted-for variations in the detector count rates. These variations were shown to have little effect on the localization; however, as part of this investigation a method was developed to automatically detect and classify various types of anomaly. Second, studies were performed to examine the accuracy and precision of the localization versus the activity of the source relative to background. These studies suggest that the algorithm performs well for signal-to-noise ratio values down to ∼0.5. Finally, the last case examined the use of a detector model which only modeled the occlusion of sources by objects in the scene, treating any such objects as fully opaque. This model has the advantage of not requiring estimates of the cross sections of interposed objects and, despite some limitations, it was shown that it is still able to achieve useful localization of the source.
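
As a very rough illustration of Bayesian source localization of this general kind (not the paper's detector model or data): detectors record Poisson counts, the expected count is taken to fall off as 1/r² above a background term, and the posterior over a grid of candidate positions is proportional to the product of Poisson likelihoods. All numbers, the grid, and the response model below are assumptions.

```python
# Grid-based Bayesian localization sketch with a Poisson count likelihood.
import math

detectors = [((0.0, 0.0), 130), ((10.0, 0.0), 45), ((0.0, 10.0), 60)]  # (position, counts)
background, strength = 20.0, 1000.0      # assumed background counts and source term

def expected(src, det):
    r2 = (src[0] - det[0]) ** 2 + (src[1] - det[1]) ** 2 + 1.0   # +1 avoids r = 0
    return background + strength / r2

def log_post(src):
    lp = 0.0
    for pos, n in detectors:
        lam = expected(src, pos)
        lp += n * math.log(lam) - lam - math.lgamma(n + 1)       # Poisson log-likelihood
    return lp                                                    # flat prior over the grid

grid = [(x, y) for x in range(11) for y in range(11)]
print("MAP source position:", max(grid, key=log_post))
```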

Data Source: SUMMARY AND RECOMMENDATION

No representations, express or implied are made by eResearch as to the accuracy, completeness or correctness of its research. Opinions and estimates expressed in its research represent eResearch’s judgment as of the date of its reports and are subject to change without notice and are provided in good faith and without legal responsibility. Its research is not an offer to sell or a solicitation to buy any securities. The securities discussed may not be eligible for sale in all jurisdictions. Neither eResearch nor any person accepts any liability whatsoever for any direct or indirect loss resulting from any use of its research or the information it contains. This report may not be reproduced, distributed or published without the express permission of eResearch.

An Integrated Approach to Heterogeneous Data for Information Extraction

Compared to NER, RDR remains a comparatively hot and difficult topic. Although some statistical RDR systems have been developed, such as the systems participating in ACE, they were usually designed only for homogeneous data, as were most of the existing statistical NER systems. Therefore, most of the previous web text mining systems adopted a rule-based approach to extract information (Rosenfeld et al., 2004; Soderland, 1999). As with rule-based NER, the main problem of rule-based RDR is the difficulty of designing rules for all kinds of text. In recent years, some studies have been done to learn rules by a semi-supervised or totally unsupervised approach (Mann & Yarowsky, 2003; Mann, 2006; Rosenfeld & Feldman, 2006). These approaches only detect relations existing within a sentence, which is not enough for web data, as some relations occur across sentences. Therefore, some patterns specific to web data need to be learned. Overall, RDR is still at the exploration stage.

Integrating Heterogeneous Data Sources Using XML

Several well-known research projects and prototypes, such as Garlic [2], Tsimmis [3], MedMaker [9], and Mix [10], are structural approaches and take a global-as-view approach. A common data model is used, e.g., OEM (Object Exchange Model) in Tsimmis and MedMaker. Mix uses XML as the data model; an XML query language, XMAS, was developed and used as the view definition language there. DDXMI (Distributed Database XML Metadata Interface) builds on XML Metadata Interchange. DDXMI is a master file including database information, XML path information (a path for each node starting from the root), and semantic information about XML elements and attributes. A system prototype has been built that generates a tool to do the metadata integration, producing a master DDXMI file, which is then used to generate queries to local databases from master queries. In this approach local sources are designed according to DTD definitions; therefore, the integration process starts from parsing the DTD associated with each source.
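
A hedged sketch of the DDXMI idea as summarized above: a master mapping records, for each element path in the global schema, the corresponding path in each local source, and master queries are rewritten into local queries by substituting those paths. The mapping content and function below are invented for illustration; the real DDXMI file is richer (database information, semantic annotations).

```python
# Toy master-to-local path mapping and query rewriting in the spirit of DDXMI.
MASTER_MAP = {
    "book/title":  {"library_db": "publication/name",   "store_db": "item/title"},
    "book/author": {"library_db": "publication/writer", "store_db": "item/author"},
}

def rewrite(master_path, source):
    """Translate a master-schema path into the path used by one local source."""
    local = MASTER_MAP.get(master_path, {}).get(source)
    if local is None:
        raise KeyError(f"{master_path!r} is not mapped for source {source!r}")
    return local

print(rewrite("book/title", "library_db"))   # -> publication/name
print(rewrite("book/author", "store_db"))    # -> item/author
```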

Obstacle Detection Based on Heterogeneous Data Fusion

Abstract. Obstacle detection is very important for the autonomous movement of robots. A variety of sensors is used in obstacle detection to obtain information about obstacles, but the different types of data obtained by these sensors cannot be analyzed directly. This paper proposes an obstacle detection method based on heterogeneous data fusion. An image processing method is adopted to process the image of an obstacle obtained by a monocular vision sensor, from which the size and position of the obstacle can be obtained. Infrared distance sensors are then used to obtain the distance to obstacles. Finally, the location and distance information of obstacles is combined by a specific algorithm. Two experiments were performed to verify the effectiveness of the proposed method, and they show that it can effectively obtain the position and distance of obstacles.
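
The paper's specific fusion algorithm is not given in the excerpt, so the following is only a rough sketch of how such heterogeneous measurements could be combined: the camera's bounding box gives a bearing, the infrared sensor pointed along that bearing gives a range, and together they yield a position estimate. The geometry, parameters and numbers are assumptions for illustration.

```python
# Toy camera + infrared fusion: bearing from the image, range from the IR sensor.
import math

def fuse(bbox_center_px, image_width_px, horizontal_fov_deg, ir_distance_m):
    """Return (x, y) of the obstacle in the robot frame (x forward, y to the left)."""
    # bearing: normalised offset of the bounding-box centre from the optical axis
    offset = (bbox_center_px - image_width_px / 2) / (image_width_px / 2)
    bearing = math.radians(offset * horizontal_fov_deg / 2)
    x = ir_distance_m * math.cos(bearing)
    y = -ir_distance_m * math.sin(bearing)      # right of centre -> negative y
    return x, y

print(fuse(bbox_center_px=400, image_width_px=640, horizontal_fov_deg=60, ir_distance_m=1.2))
```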

Cleaning heterogeneous data at and deploying it on cloud– A Review

information. It can be facts related to any object under consideration, for example your name, age, height, weight, etc. In computing terms, data refers to information stored on a computer system and used by applications and users to accomplish tasks; it is used for analysis, reasoning, or decisions. A database can be defined as a collection of associated information stored in an organized manner, which makes data management easy. A Database Management System (DBMS) is a collection of programs that enables its users to access the database, manipulate data and help in the representation of data. For example, an online telephone directory would use a database management system to store data relating to people, phone numbers, and other contact details, and an electricity service provider uses a DBMS to manage billing, client-related issues, and fault data. For storing very small amounts of data we use flat files like Word or Excel documents, but for storing large amounts of information we need databases like Oracle, SQL Server, MySQL or FoxPro. Data stored at data centers has certain issues such as incorrect data and duplicate data, for which data cleaning is necessary. Data cleaning is the removal of flawed data caused by disagreement, inconsistency, keying mistakes, missing values, etc., used mainly in databases.

Open Source Data Deduplication

Data deduplication is a specialized data compression technique for eliminating coarse-grained redundant data, typically to improve storage utilization. In the deduplication process, [r]
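
A minimal sketch of the coarse-grained deduplication idea: split data into fixed-size chunks, hash each chunk, and store a chunk only the first time its hash is seen, replacing repeats with references. Real deduplication systems typically use content-defined chunking and persistent indexes; this toy version only illustrates the principle.

```python
# Fixed-size chunk deduplication sketch: store each distinct chunk once.
import hashlib

def dedup(data: bytes, chunk_size: int = 4096):
    store, refs = {}, []                 # hash -> chunk bytes, ordered references
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        h = hashlib.sha256(chunk).hexdigest()
        store.setdefault(h, chunk)       # stored once, however often it repeats
        refs.append(h)
    return store, refs

data = b"A" * 8192 + b"B" * 4096 + b"A" * 4096      # the all-"A" chunk repeats
store, refs = dedup(data)
print(len(refs), "chunks referenced,", len(store), "chunks actually stored")   # 4, 2
```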

Secure Data Sharing and Processing in Heterogeneous Clouds

The extensive cloud adoption among European public sector players has empowered them to own and operate a range of cloud infrastructures. These deployments vary both in size and capabilities, as well as in the range of employed technologies and processes. The public sector, however, lacks the necessary technology to enable effective, interoperable and secure integration of a multitude of its computing clouds and services. In this work we focus on the federation of private clouds and the approaches that enable secure data sharing and processing among the collaborating infrastructures and services of public entities. We investigate the aspects of access control, data and security policy languages, as well as cryptographic approaches that enable fine-grained security and data processing in semi-trusted environments. We identify the main challenges and frame the future work that will serve as an enabler of interoperability among heterogeneous infrastructures and services. Our goal is to enable both security and legal conformance as well as to facilitate transparency, privacy and effectiveness of private cloud federations for public sector needs. © 2015 The Authors. Published by Elsevier B.V.