Medical Databases


USE OF MEDICAL DATABASES BY THE FACULTY AND RESEARCHERS OF SPEECH AND HEARING INSTITUTIONS IN INDIA: A STUDY

Table 10 depicts the responses of the faculty and research scholars on the ten most useful online medical databases. Among the respondents, 85 (29.8%) felt that PubMed is the most useful. Next, 47 (16.4%) respondents, comprising 37 (17.6%) faculty and 10 (13.3%) research scholars, mentioned that Science Direct is most useful. A further 37 (12.9%) respondents, 27 (12.8%) faculty and 10 (13.3%) research scholars, indicated that INFLIBNET's N-LIST is useful. EBSCO is most frequently used by 29 (10.2%) respondents, recommended by 26 (12.3%) faculty and 3 (4.0%) research scholars, while another 29 (10.2%) respondents, 28 (13.3%) faculty and 1 (1.3%) research scholar, use ResearchGate. Further, 24 (8.4%) respondents rely on MEDLINE, indicated by 19 (9.0%) faculty and 5 (6.6%) research scholars. Another segment of 21 (7.4%) respondents, comprising 20 (9.5%) faculty and 1 (1.3%) research scholar, reported high use of INFLIBNET's e-ShodhSindhu. Five (1.8%) respondents, 3 (1.4%) faculty and 2 (2.6%) research scholars, rely on the Neuroscience Information Framework. Finally, 4 (1.4%) respondents, 3 (1.4%) faculty and 1 (1.3%) research scholar, felt that the Natural Medicines Comprehensive Database and the BioMed Central database are quite useful. Thus, among the online databases, PubMed and Science Direct are used extensively by the research scholars, and the use of e-resources by research scholars is high compared with the faculty members. BioMed Central and the Neuroscience Information Framework are not used by the large majority of users, whether faculty or research scholars. The observed Cramér's V (CV = .261; p = .013) confirms that the majority of respondents use these top ten e-databases to a high degree.
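The Cramér's V reported above comes from a chi-square test on the faculty-versus-scholar contingency table. As a hedged illustration (not the study's own computation), the sketch below computes V in Python from the faculty/scholar counts quoted in the abstract; rows without a reported split, such as PubMed, are omitted.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Faculty vs. research-scholar counts quoted above (partial table).
table = np.array([
    [37, 10],   # Science Direct
    [27, 10],   # N-LIST
    [26,  3],   # EBSCO
    [28,  1],   # ResearchGate
    [19,  5],   # MEDLINE
    [20,  1],   # e-ShodhSindhu
    [ 3,  2],   # Neuroscience Information Framework
    [ 3,  1],   # NMCD / BioMed Central
])

chi2, p, dof, _ = chi2_contingency(table)
n = table.sum()
k = min(table.shape) - 1                # smaller table dimension minus one
cramers_v = np.sqrt(chi2 / (n * k))     # V = sqrt(chi2 / (n * (k - 1)))
print(f"Cramér's V = {cramers_v:.3f}, p = {p:.3f}")
```

The printed value will not match the paper's CV = .261 exactly, since the full Table 10 is not reproduced here.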

Medical databases in studies of drug teratogenicity: methodological issues

Since birth defects are rare, assembling cohorts to observe their occurrence is expensive in terms of time, money, and resources, leading to widespread use of the case-control design. Case-control studies, often based on interviews or questionnaires, are susceptible to selection and recall bias, and they do not allow estimation of absolute risks (prevalences) of birth defects. Existing medical databases are increasingly being used to conduct pharmacoepidemiologic cohort studies, including studies of drug teratogenicity [8]. Medical databases, some of which have been in existence …

Application of data mining techniques for outlier mining in medical databases

…information for medical researchers and doctors. In fact, the study of medical data using DM techniques is virtually an unexplored frontier that needs extraordinary attention. The results of the present investigation suggest that (i) the extraordinary behavior of outliers facilitates the exploration of valuable knowledge hidden in their domain and helps decision makers provide improved, reliable, and efficient healthcare services; (ii) medical doctors can use the present experimental results as a tool to make sensible predictions from vast medical databases; and finally (iii) a thorough understanding of the complex relationships among patient symptoms, diagnoses, and behavior is the most promising area of outlier mining. To carry out this experiment on outlier detection, the Pima dataset was used in a simulation performed with TANAGRA. A total of 193 outliers were detected across the statistics leverage, R-standard, R-student, DFFITS, Cook's D, and covariance ratio.
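The six diagnostics named above are standard regression-influence measures. A minimal sketch of computing them, assuming an ordinary-least-squares fit in Python with statsmodels on synthetic stand-in data (the Pima dataset and the TANAGRA outputs are not reproduced here):

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import OLSInfluence

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))            # stand-in predictors
y = X @ [1.5, -2.0, 0.5] + rng.normal(size=200)

results = sm.OLS(y, sm.add_constant(X)).fit()
infl = OLSInfluence(results)

leverage   = infl.hat_matrix_diag             # leverage (hat values)
r_standard = infl.resid_studentized_internal  # internally studentized residuals
r_student  = infl.resid_studentized_external  # externally studentized residuals
dffits, dffits_thresh = infl.dffits           # DFFITS and its conventional cutoff
cooks_d, _ = infl.cooks_distance              # Cook's D
cov_ratio  = infl.cov_ratio                   # covariance ratio

# Flag observations whose |DFFITS| exceeds the threshold.
print("DFFITS outliers:", np.where(np.abs(dffits) > dffits_thresh)[0])
```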

An Empirical Comparison of Data Mining Techniques in Medical Databases

This paper conducted a comparison of four data mining techniques, namely NBT, Ridor, NB, and J48, which rely on careful KDD steps and can be used for classification in medical databases. The performance of the techniques was validated by recall, precision, and accuracy values. The NBT technique showed the best performance on our medical databases (100%), but J48 and Ridor are also useful and may be a better fit for our case.
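As a hedged illustration of the comparison protocol: the paper uses Weka's NBT, Ridor, NB, and J48 on its own medical databases, which are not available here. In the sketch below, scikit-learn's GaussianNB and DecisionTreeClassifier stand in for NB and J48 (a C4.5-style tree); NBT and Ridor have no direct scikit-learn equivalent, and the dataset is a public stand-in.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score

X, y = load_breast_cancer(return_X_y=True)   # public medical dataset
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)

for name, clf in [("NB", GaussianNB()),
                  ("J48-like tree", DecisionTreeClassifier(random_state=42))]:
    y_pred = clf.fit(X_tr, y_tr).predict(X_te)
    print(name,
          f"accuracy={accuracy_score(y_te, y_pred):.3f}",
          f"precision={precision_score(y_te, y_pred):.3f}",
          f"recall={recall_score(y_te, y_pred):.3f}")
```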


OUTLIER MINING IN MEDICAL DATABASES BY USING STATISTICAL METHODS

Outlier detection in the medical and public health domains typically works with patient records and is a very critical problem. This paper elaborates how outliers can be detected using statistical methods. Totals of 78, 67, 82, 78, and 69 outliers were detected in five medical datasets using the statistics leverage, R-standard, R-student, DFFITS, Cook's D, and covariance ratio. The results of the present investigation suggest that (i) the extraordinary behavior of outliers facilitates the exploration of valuable knowledge hidden in their domain and helps decision makers provide improved, reliable, and efficient healthcare services; (ii) medical doctors can use the present experimental results as a tool to make sensible predictions from vast medical databases; and finally (iii) a thorough understanding of the complex relationships among patient symptoms, diagnoses, and behavior is the most promising area of outlier mining.

A Study on the Feasibility of Subject Authority Control of Web-based Persian Medical Databases: An Iranian Experience

One of the most important factors challenging "information storage and retrieval" in the Internet environment is the lack of authority control, i.e. subject authority control. The present research examines the feasibility of subject authority control for Persian medical databases available on the Net. Following the research methodology, we randomly chose 50 keywords used by users searching databases for articles. In the pre-test stage, these keywords were searched in Iranmedex, a database of Persian medical articles. After comparison with the Persian medical thesaurus, the keywords that exactly matched thesaurus terms were entered into a database designed in Microsoft Access. We then entered these authorized keywords into Iranmedex. Findings of the new search sessions revealed that authority control, on the one hand, makes information retrieval more precise and accurate and, on the other, prevents false drops. The findings can be used to improve the process of information storage and retrieval on the Internet. The research concludes with a model for applying thesauri as authority-control tools to other databases available on the Internet.
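A minimal sketch of the matching step described above, assuming the thesaurus can be modeled as a mapping from entry terms to preferred terms; the entries below are invented English placeholders, not terms from the Persian medical thesaurus used in the study:

```python
# Controlled vocabulary: entry term -> authorized (preferred) term.
thesaurus = {
    "heart attack": "Myocardial Infarction",
    "high blood pressure": "Hypertension",
    "hypertension": "Hypertension",
}

def authorize(keyword: str):
    """Return the authorized form of a search keyword, if the thesaurus has one."""
    return thesaurus.get(keyword.strip().lower())

for kw in ["Heart attack", "stroke", "High blood pressure"]:
    term = authorize(kw)
    print(f"{kw!r} -> {term if term else 'no authorized match'}")
```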

Improvement of the quality of medical databases: data-mining-based prediction of diagnostic codes from previous patient codes

Introduction. Diagnoses and medical procedures collected under the French hospital information system are recorded in a nationwide database, the "PMSI national database", which is available for exploitation. The quality of the data in this database is directly related to the quality of coding, which can be poor. Among the proposed methods for exploiting health databases, data mining techniques are particularly interesting. Our objective is to build sequential rules for predicting missing diagnoses by data mining the PMSI national database. Method. Our working sample was constructed from the national database for the years 2007 to 2010. The information retained for rule construction consisted of medical diagnoses and medical procedures. The rules were selected using a statistical filter, and the selected rules were validated by case review based on medical letters, which enabled us to estimate the improvement in diagnosis recoding.
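A toy sketch of the rule-construction idea, assuming the rules resemble classic association rules filtered by confidence (the paper's exact sequential-rule formalism and statistical filter are not reproduced); the stays and codes below are invented:

```python
from collections import Counter
from itertools import combinations

# Hypothetical per-stay code sets (ICD-style diagnoses and procedure codes).
stays = [
    {"E11", "I10", "Z51"},   # diabetes, hypertension, ...
    {"E11", "I10"},
    {"E11", "N18"},          # diabetes, chronic kidney disease
    {"E11", "I10", "N18"},
]

pair_count, item_count = Counter(), Counter()
for codes in stays:
    item_count.update(codes)
    pair_count.update(combinations(sorted(codes), 2))

min_conf = 0.6
for (a, b), n_ab in pair_count.items():
    for ante, cons in [(a, b), (b, a)]:
        conf = n_ab / item_count[ante]   # confidence of rule ante -> cons
        if conf >= min_conf:
            print(f"{ante} -> {cons}  (confidence {conf:.2f})")
```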

An overview of pharmacoepidemiology in New Zealand: medical databases, registries and research achievements

11. Winnard D, Wright C, Taylor WJ, et al. National prevalence of gout derived from administrative health data in Aotearoa New Zealand. Rheumatology. 2012;51:901–9. doi:10.1093/rheumatology/ker361.
12. Winnard D, Wright C, Jackson G, et al. Gout, diabetes and cardiovascular disease in the Aotearoa New Zealand adult population: co-prevalence and implications for clinical practice. The New Zealand Medical Journal (Online). 2013;126.
13. Blank M-L, Parkin L, Paul …


Comparison of distance metrics for hierarchical data in medical databases

…metrics discriminate sufficiently when clustering the patient population. Fig. 6(a), Fig. 6(b), Fig. 6(c), and Fig. 6(d) show the results of clustering using the k-means algorithm, the simplest clustering method, which requires the number of clusters to be known in advance. In this work, we chose the number of clusters to be 3; however, more thorough data analysis is required for future work, and more than three clusters might be considered. The clusters have been plotted using the clusplot function from the R software, which represents all observations as points in the plots using principal component analysis [15]. PCA is used on the dataset for visualisation purposes only, and no feature selection has been carried out. The clusters are labeled with numbers (1, 2, and 3) as shown in Fig. 6, and the geometric and Hamming metrics discriminate successfully on the population for both drugs. We chose only two figures for each drug, one for each group of metrics. Table VI and Table VII show the number of patients in each cluster. The patients in Table III are grouped in cluster 1 for all the metrics used, while the patients in Table V are grouped in cluster 1 for the geometric and Hamming distance metrics only. In contrast, the distances for the same patients using the pq-gram and edit distance metrics show very poor similarity, so cluster 1 for both of those metrics contains some similar distances other than those in Table V. The likely reason is the lack of data related to the second drug. In general, cluster 1 in Table VI and Table VII contains the similar patients who have all or some medical events related to the drug in common, while cluster 2 contains the non-similar ones. All other patients, not in cluster 1 or cluster 2, are grouped in cluster 3 as shown in Fig. 6(a) through Fig. 6(d).
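A minimal sketch of the clustering-and-visualisation step described above: k-means with k = 3 followed by a 2-D PCA projection, mirroring the paper's use of R's clusplot. The patient distance data are replaced here by random stand-in feature vectors:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
patients = rng.normal(size=(60, 8))      # stand-in patient feature vectors

labels = KMeans(n_clusters=3, n_init=10, random_state=1).fit_predict(patients)
coords = PCA(n_components=2).fit_transform(patients)  # 2-D projection for plotting

for cluster in range(3):
    print(f"cluster {cluster + 1}: {np.sum(labels == cluster)} patients")
```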

A New Technology for Smooth Migration from Traditional Databases to Unstructured Databases

Upper-layer applications of M2M are mainly divided into two categories: the online and the history-data application systems. The former receives real-time working-condition data delivered directly from the load-balanced network on the M2M platform and stores a small amount of data in its own database, on which it bases the services it provides to its upper layers; this system is not affected by the migration of M2M business databases. The history-data application system needs to access the M2M business database and the M2M data warehouse, including querying working-condition data and assessing data for the remote maintenance and operating decisions systems. Because the working-condition data can be written into the LaUD big-data system, the need to write it into the Oracle database is eliminated by modifying the working-condition data query system, the remote maintenance system, and the operating decisions system, thus realizing a "true" migration.

Glycoproteomic and glycomic databases

Protein glycosylation serves critical roles in the cellular and biological processes of many organisms. Aberrant glycosylation has been associated with many illnesses, such as hereditary and chronic diseases like cancer, cardiovascular diseases, neurological disorders, and immunological disorders. Emerging mass spectrometry (MS) technologies that enable high-throughput identification of glycoproteins and glycans have accelerated analysis and made possible the creation of dynamic and expanding databases. Although glycosylation-related databases have been established by many laboratories and institutions, they are not yet widely known in the community. Our study reviews 15 different publicly available databases and identifies their key elements so that users can identify the most applicable platform for their analytical needs. These databases include biological information on experimentally identified glycans and glycopeptides from various cells and organisms such as human, rat, mouse, fly, and zebrafish. The features of these databases (7 for glycoproteomic data, 6 for glycomic data, and 2 for glycan-binding proteins) are summarized, including the enrichment techniques used for glycoproteome and glycan identification. Furthermore, databases such as Unipep, GlycoFly, and GlycoFish, recently established by our group, are introduced. The unique features of each database, such as the analytical methods used and the bioinformatic tools available, are summarized. This information will be a valuable resource for the glycobiology community, as it presents the analytical methods and glycosylation-related databases together in one compendium. It also represents a step towards the desired long-term goal of integrating the different glycosylation databases in order to better characterize and categorize glycoproteins and glycans for biomedical research.

Securing Patient Information in Medical Databases

A growing trend in hospitals is digitalisation, where documents containing sensitive patient information are stored digitally. This raises concerns about the security of the stored documents compared to storing them on paper, such as their confidentiality and integrity. Confidentiality of patient information was an issue even when it was still stored on paper: a malicious person could enter the hospital and steal the paper documents, and hospital staff can read paper documents they are not supposed to read as long as they have physical access to them. Confidentiality of paper documents is enforced by putting them behind locks; essentially, it comes down to enforcing physical access control to the document. When a document is stored digitally, similar issues arise. There is still the issue of access control, namely the question of who is allowed to read or change the document, and there is the issue of easy copying: it is much easier to copy a digital document than a paper one. An adversary who gains access to a workstation in the hospital can copy documents from that workstation, which is a problem if those documents contain patient information. The main difference from paper documents is that an adversary does not need to be in the hospital itself: hacking a hospital workstation that is connected to the internet is sufficient to gain access to patient information. In the past, hackers have obtained access to servers with medical data by, for example, hacking a server [37, 18] or stealing a laptop with the patient records of millions of patients [13]. With regard to integrity, it is generally easier to change a digital document than a paper document, as changes on paper are more noticeable and physical access is required to make the change.
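One standard mitigation for the copying threat described above is encryption at rest, so that a copied file is unreadable without the key. A minimal sketch using the Python cryptography package's Fernet recipe (an illustration, not the thesis's own mechanism; key management is out of scope here):

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in practice, stored in a key vault
f = Fernet(key)

record = b"patient: Jane Doe; diagnosis: ..."   # illustrative record
token = f.encrypt(record)            # ciphertext safe to store on disk

assert f.decrypt(token) == record    # only key holders can read it back
print(token[:32], b"...")
```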

Databases in Clinical Research. Harvard-MIT Division of Health Sciences and Technology HST.951J: Medical Decision Support

• Artificial Neural Networks are non-linear mathematical models which incorporate a layer of hidden “nodes” connected to the input layer (covariates) and the output.
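A minimal sketch of the architecture the bullet describes: covariates feed one hidden layer of nodes, whose outputs feed a single output node. The weights below are random placeholders, not a trained model:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=4)               # input layer: 4 covariates

W_hidden = rng.normal(size=(3, 4))   # 3 hidden nodes
W_out = rng.normal(size=3)

hidden = np.tanh(W_hidden @ x)                   # non-linear hidden activations
output = 1 / (1 + np.exp(-(W_out @ hidden)))     # sigmoid output in (0, 1)
print(f"predicted probability: {output:.3f}")
```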


Special Topics in Security and Privacy of Medical Information. Privacy HIPAA. Sujata Garera. HIPAA Anonymity Hippocratic databases.

Physical safeguards guard data integrity, confidentiality, and availability. These safeguards protect physical computer systems, medical documents, and related buildings and equipment from intrusion. The use of locks, keys, and administrative measures to control access to computer systems and facilities is also included.


XML enabled databases. Non relational databases. Guido Rotondi

Students will be introduced to the appropriate use of technology for managing the ETL processes resulting from collecting and feeding data from large structured and unstructured data sources.
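A minimal ETL sketch in the spirit of the course description: extract records from a CSV source, transform and validate them, and load them into a database table. The file contents, schema, and SQLite target are invented for illustration:

```python
import csv
import io
import sqlite3

raw = io.StringIO("id,amount\n1,10.5\n2,not-a-number\n3,7.25\n")

# Extract + transform: parse rows, dropping those that fail validation.
rows = []
for rec in csv.DictReader(raw):
    try:
        rows.append((int(rec["id"]), float(rec["amount"])))
    except ValueError:
        continue                      # skip malformed records

# Load into the target (here an in-memory SQLite stand-in).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE facts (id INTEGER PRIMARY KEY, amount REAL)")
con.executemany("INSERT INTO facts VALUES (?, ?)", rows)
print(con.execute("SELECT COUNT(*), SUM(amount) FROM facts").fetchone())
```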


Integrity Coded Databases: Ensuring Correctness and Freshness of Outsourced Databases

ICDB scheme choices based on cloud service schemes: If the database size and the number of transactions for the database will never exceed the maximum limit offered by the cloud service package, then all ICDB schemes can be considered. In this case, if the database application requires integrity protection down to each data field, then the OCF schemes are the choice. Furthermore, if the database application requires the database server to perform homomorphic operations on behalf of the data owner, then the ICDB schemes using the RSA algorithm are the choice. For cloud services that charge only for outbound data, the ICDB schemes in the AV mode of the DMV model are the best choice, since the outbound data is almost the same as for a standard SQL database, with just additional data for serial numbers. This keeps the cost similar to that of standard SQL databases.
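A toy illustration of why RSA-based ICDB schemes can support server-side homomorphic operations: textbook RSA is multiplicatively homomorphic, i.e. E(a) · E(b) mod n = E(a · b mod n). The tiny primes below are for demonstration only and are not from the paper:

```python
# Textbook RSA with toy parameters (never use such small primes in practice).
p, q, e = 61, 53, 17
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))        # private exponent

enc = lambda m: pow(m, e, n)
dec = lambda c: pow(c, d, n)

a, b = 6, 7
product_cipher = (enc(a) * enc(b)) % n   # server multiplies ciphertexts only
assert dec(product_cipher) == (a * b) % n
print("decrypted product:", dec(product_cipher))
```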

In-Memory Databases

The goal of this thesis was to introduce in-memory databases and the techniques that make them full-featured systems suitable for use in an enterprise environment. I summarized the main types and concepts of database systems to provide the groundwork for studying in-memory databases. I described the memory hierarchy for a better understanding of how data are stored in in-memory databases and how the speed of data processing depends on their location in the hierarchy. I provided a basic description of the row-based and column-based data layouts and the operations suited to each of these layouts, together with a description of compression techniques applicable mostly to the column-based layout. In the next chapter I described how in-memory databases deal with data persistence and data updates. I provided an overview of available in-memory databases and summarized their level of maturity.
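A small sketch of the row-based versus column-based layouts mentioned above: the same three records stored both ways, where an aggregate over one attribute touches contiguous data only in the columnar layout (the records are invented):

```python
rows = [  # row store: one tuple per record
    (1, "alice", 120),
    (2, "bob",    80),
    (3, "carol", 200),
]

columns = {  # column store: one array per attribute
    "id":     [1, 2, 3],
    "name":   ["alice", "bob", "carol"],
    "amount": [120, 80, 200],
}

print(sum(r[2] for r in rows))       # row store: reads every full tuple
print(sum(columns["amount"]))        # column store: reads one contiguous array
```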

Analysis of Distributed Databases

A distributed database is a database in which storage devices are not all attached to a common processing unit such as the CPU, controlled by a distributed database management system (together sometimes called a distributed database system). It may be stored on multiple computers located in the same physical location, or dispersed over a network of interconnected computers. Unlike parallel systems, in which the processors are tightly coupled and constitute a single database system, a distributed database system consists of loosely coupled sites that share no physical components. System administrators can distribute collections of data (e.g. in a database) across multiple physical locations. A distributed database can reside on network servers on the Internet, on corporate intranets or extranets, or on other company networks. Because they store data across multiple computers, distributed databases can improve performance at end-user worksites by allowing transactions to be processed on many machines instead of being limited to one. Two processes ensure that distributed databases remain up to date and current: replication and duplication.

Querying Large Databases

All around us, we hear that we are living in the world of "big data." What challenges does the world of big data bring? In the modern world, we all rely on vast databases to quickly and accurately retrieve our data in many areas. As datasets grow in size, querying them quickly becomes cost-prohibitive. There is a trade-off between hardware cost and the time it takes to complete a query, and even if we assume an infinite budget for hardware, modern technology has its limits. A linear search (one that takes time proportional to the size of the dataset) through a dataset that is a petabyte in size would take days with a modern solid state drive [3]. This is the challenge this paper addresses: how does one query a large database in a reasonable amount of time given limited resources?
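A brief illustration of why the linear-time bound matters: on sorted (indexed) data, an O(log n) lookup replaces the O(n) scan. A hedged sketch on an in-memory stand-in dataset (timings are indicative only):

```python
import bisect
import time

n = 10_000_000
data = list(range(n))                 # already-sorted stand-in dataset
target = n - 2

t0 = time.perf_counter()
found_linear = target in data         # O(n) scan
t1 = time.perf_counter()
i = bisect.bisect_left(data, target)  # O(log n) on sorted data
found_binary = i < n and data[i] == target
t2 = time.perf_counter()

print(f"linear: {t1 - t0:.4f}s, binary: {t2 - t1:.6f}s, both found:",
      found_linear and found_binary)
```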

Semantic Deduplication in Databases

Although different deduplication methods have been proposed and utilized, no single best arrangement has been produced to handle a wide range of redundancies. The objective of this work is to improve storage utilization in databases. Experiments show that the combination of the methods used (LSH, Bloom filter, and semantic analysis) improves the efficiency of the system. Chunk-level deduplication divides data into chunks, and the Bloom filter reduces the search time; together they can provide efficient deduplication in databases. The test results demonstrate that the framework can achieve a reduction in storage size and overall search time. From these observations, it can be concluded that the presence of the Bloom filter and semantic analysis greatly improves system performance. The performance of the system could be further improved by adding more hash functions in the future.
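A compact sketch of the Bloom-filter membership test used to skip duplicate chunks: false positives are possible but false negatives are not, which is why it can safely prune the search. The bit-array size and hash count below are illustrative assumptions, not the paper's parameters:

```python
import hashlib

class BloomFilter:
    def __init__(self, m_bits: int = 1 << 16, k_hashes: int = 4):
        self.m, self.k = m_bits, k_hashes
        self.bits = bytearray(m_bits // 8)

    def _positions(self, chunk: bytes):
        # Derive k bit positions by salting SHA-256 with the hash index.
        for i in range(self.k):
            h = hashlib.sha256(i.to_bytes(2, "big") + chunk).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, chunk: bytes):
        for pos in self._positions(chunk):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def may_contain(self, chunk: bytes) -> bool:
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(chunk))

bf = BloomFilter()
bf.add(b"chunk-A")
print(bf.may_contain(b"chunk-A"), bf.may_contain(b"chunk-B"))  # True, (almost surely) False
```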
