6 Methods of data collection and analysis

transcripts, documents, blogs, surveys, pictures, videos, etc. You may have been in the situation where you have carried out six focus group discussions but then are not quite sure what to do with the 30 pages of notes you collected during the process. Do you just highlight what seems most relevant, or is there a more systematic way of analysing it? Qualitative data analysis typically revolves around the impressions and interpretations of key researchers. However, through facilitation, study participants can also take an active role in identifying key themes emerging from the data. Because qualitative analysis relies on researchers’ impressions, it is vital that qualitative analysis is systematic and that researchers report on their impressions in a structured and transparent form. This is particularly important considering the common perception that qualitative research is not as reliable and sound as quantitative research.

Qualitative Data Collection and Analysis Methods: The INSTINCT Trial

Patient care practices often lag behind current scientific evidence and professional guidelines. The failure of such knowledge translation (KT) efforts may reflect inadequate assessment and management of specific barriers confronting both physicians and patients at the point of treatment. Effective KT in this setting may benefit from the use of qualitative methods to identify and overcome these barriers. Qualitative methodology allows in-depth exploration of the barriers involved in adopting practice change and has been infrequently used in emergency medicine research. The authors describe the methodology for qualitative analysis within the INcreasing Stroke Treatment through INteractive behavioral Change Tactics (INSTINCT) trial. This includes processes for valid data collection and reliable analysis of the textual data from focus group and interview transcripts. INSTINCT is a 24-hospital, randomized, controlled study that is designed to evaluate a system-based barrier assessment and interactive educational intervention to increase appropriate tissue plasminogen activator (tPA) use in ischemic stroke. Intervention hospitals undergo baseline barrier assessment using both qualitative and quantitative (survey) techniques. Investigators obtain data on local barriers to tPA use, as well as information on local attitudes, knowledge, and beliefs regarding acute stroke treatment. Targeted groups at each site include emergency physicians, emergency nurses, neurologists, radiologists, and hospital administrators. Transcript analysis using NVivo7 with a predefined barrier taxonomy is described. This will provide both qualitative insight on thrombolytic use and the importance of specific barrier types for each site. The qualitative findings subsequently direct the form of professional education efforts and system interventions at treatment sites. ACADEMIC EMERGENCY MEDICINE 2007; 14:1064–1071 © 2007 by the Society for Academic Emergency Medicine
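
The transcript coding step described here (NVivo7 with a predefined barrier taxonomy) can be illustrated with a small sketch. The taxonomy categories, keywords, and helper functions below are hypothetical stand-ins, not the INSTINCT coding scheme, and keyword matching is only a crude proxy for the manual coding the trial actually performed.

```python
from collections import Counter

# Hypothetical barrier taxonomy; the real INSTINCT taxonomy is predefined by the investigators.
BARRIER_TAXONOMY = {
    "knowledge": ["guideline", "evidence", "window", "criteria"],
    "attitudes": ["risk", "liability", "hemorrhage", "benefit"],
    "system": ["radiology", "staffing", "transfer", "protocol"],
}

def code_segment(segment: str) -> set[str]:
    """Assign taxonomy codes to one transcript segment by keyword match."""
    text = segment.lower()
    return {code for code, terms in BARRIER_TAXONOMY.items()
            if any(term in text for term in terms)}

def tally_barriers(segments: list[str]) -> Counter:
    """Count how often each barrier type is coded across a site's transcripts."""
    counts = Counter()
    for seg in segments:
        counts.update(code_segment(seg))
    return counts

transcript = [
    "We worry about hemorrhage risk with tPA in older patients.",
    "Radiology turnaround and staffing at night delay the decision.",
]
print(tally_barriers(transcript))   # e.g. Counter({'attitudes': 1, 'system': 1})
```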

Data Collection and Analysis

• Complete data means that the value of the lifetime of each item is observed or known. For example, for life data analysis, the data (if complete, which is unusual in field data collection) would comprise the times-to-failure of all units in the field.
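
Because complete data contains an observed time-to-failure for every unit, simple estimates can be computed directly, with no censoring adjustments. The sketch below uses hypothetical failure times and an assumed exponential model; it is an illustration, not the source's analysis.

```python
# Minimal sketch with hypothetical complete life data: every unit's time-to-failure
# (in hours) is observed, so no censoring adjustments are needed.
times_to_failure = [412.0, 530.5, 615.2, 488.9, 702.1, 559.4]

n = len(times_to_failure)
mttf = sum(times_to_failure) / n          # mean time to failure
# Under an assumed exponential model, the MLE of the failure rate is n / total time.
failure_rate = n / sum(times_to_failure)

print(f"MTTF = {mttf:.1f} h, lambda = {failure_rate:.5f} per hour")
```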


A review of research process, data collection and analysis

In summary, the research process begins with defining research problems, followed by review of the literature, formulation of hypotheses, data collection, analysis, and interpretation, and ends in report writing. There are chances of occurrence of many biases in data collection. Importantly, the analysis of research data should be done with great caution. If a researcher uses a statistical test for significance, he/she should show exact p values; better still, confidence limits should be shown instead. The standard error of the mean should be shown only in the case of estimating a population parameter. Usually the between-subject standard deviation should be presented to convey the spread between subjects. In population studies, this standard deviation helps convey the magnitude of differences or changes in the mean. In interventions, also show the within-subject standard deviation (the typical error) to convey precision of measurement. The standard deviation helps convey the magnitude of differences or changes in mean performance.
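
As a concrete illustration of the quantities this passage recommends reporting, the sketch below computes a between-subject standard deviation, the standard error of the mean with confidence limits, a within-subject typical error from trial-retest differences, and an exact p value. The data and measurement scenario are hypothetical.

```python
import math
import statistics as st
from scipy import stats   # for the t distribution and exact p values

# Hypothetical paired measurements (a trial and retest on the same subjects).
trial1 = [72.1, 68.4, 75.0, 70.2, 66.9, 73.5]
trial2 = [73.0, 67.8, 76.1, 71.0, 67.5, 74.2]

n = len(trial1)
between_sd = st.stdev(trial1)                       # between-subject spread
sem = between_sd / math.sqrt(n)                     # use only when estimating a population mean
t_crit = stats.t.ppf(0.975, n - 1)
ci = (st.mean(trial1) - t_crit * sem, st.mean(trial1) + t_crit * sem)

# Within-subject SD ("typical error") from the trial-retest differences.
diffs = [b - a for a, b in zip(trial1, trial2)]
typical_error = st.stdev(diffs) / math.sqrt(2)

t_stat, p_exact = stats.ttest_rel(trial2, trial1)   # report the exact p value
print(f"95% CI {ci}, typical error {typical_error:.2f}, p = {p_exact:.3f}")
```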

Data Collection and Data Analysis in Honeypots and Honeynets

The location of the collector is an important issue in data collection. Using a secure communication channel is sufficient only for low-interaction and medium-interaction honeypots; high-interaction honeypots require another method of data collection. A collector located on a virtual honeynet based on operating-system-level virtualization is a solution for this type of honeypot. Analysis of data from different honeypots is the second important issue. In this paper we propose a framework that adds a new abstraction layer – a layer of events – over all data from honeypots. Events are essential parts of the data from each honeypot. Sandia’s taxonomy discusses the taxonomy of incidents in general; in this paper we focus on a taxonomy of incidents from the perspective of honeypots and honeynets. We propose a new honeypot-incident taxonomy, which is a modification of the Sandia taxonomy already mentioned.
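
The event layer the authors describe can be sketched as a normalized record type that heterogeneous honeypot logs are mapped into before analysis. The field names, log format, and taxonomy classes below are hypothetical, not the paper's actual schema or the modified Sandia taxonomy.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical event record forming the abstraction layer over raw honeypot data.
@dataclass
class HoneypotEvent:
    timestamp: datetime
    honeypot_id: str          # which low-, medium- or high-interaction sensor saw it
    source_ip: str
    action: str               # e.g. "probe", "exploit", "malware-download"
    target: str               # attacked service or resource
    taxonomy_class: str       # class from the honeypot-incident taxonomy

def normalize_log_line(line: str, honeypot_id: str) -> HoneypotEvent:
    """Map one raw log line (hypothetical format 'time|ip|action|target') to an event."""
    time_s, ip, action, target = line.strip().split("|")
    return HoneypotEvent(
        timestamp=datetime.fromisoformat(time_s),
        honeypot_id=honeypot_id,
        source_ip=ip,
        action=action,
        target=target,
        taxonomy_class="unauthorized-access" if action == "exploit" else "reconnaissance",
    )

event = normalize_log_line("2023-05-01T12:00:00|203.0.113.7|exploit|ssh", "hp-high-01")
print(event.taxonomy_class)
```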

A narrative review of data collection and analysis guidelines for comparative effectiveness research in chronic pain using patient-reported outcomes and electronic health records

The CHOIR software was designed to be flexible so that each site could customize its questionnaires. Early versions of CHOIR allowed for two survey types: initial surveys for new patients and returning patients answering their first CHOIR survey and shorter follow-up surveys for all other appointments. Newer versions of the CHOIR software can create customized surveys for different appointment types such as procedures and psychological evaluations. In 2016, UPMC Pain Medicine implemented the CHOIR system into its eight regional clinics in western Pennsylvania. Using the flexible software architecture, UPMC customized initial and follow-up surveys to include PROMIS measures and survey questions from the original Stanford CHOIR surveys plus new measures and custom site-specific questions. Measures used in the UPMC version of the CHOIR system are italicized in Table 1, and the UPMC data dictionary does not include all of the PROMIS item banks described. More specifically, the UPMC CHOIR surveys have added the PainDETECT

Analysis of erroneous data entries in paper based and electronic data collection

Data were collected from 362 patients. The EDC database contained a duplication of the unique identifier (CODE) in two cases; accordingly, four participants were excluded from both databases, resulting in 358 (98.9%) participants with paired data collected by both systems. Each data set contained 56 variables (Additional file 1: Table S1). A total of 4 (7.1%) variables were excluded from this analysis: the variable “CODE” as this was the linking variable between datasets, “Study Name” (STUDY) since this was autocompleted among both systems, “Patient Initials” (INI) and “Place of Birth” (POB) as these were entered as free text and had to be translated from Devanagari to Roman letters. From the 52 (92.9%) included variables, a total of 3 (5.8%) variables contained dates, 2 (3.8%) variables recorded a specific time (in 24-h format), 10 (19.2%) variables contained continuous data, 35 (67.3%) variables contained categorical data and 2 (3.8%) variables contained text where it was assumed that data collectors would know the correct spelling of all possible answers in Roman letters (“Diagnosis-DIAG” and “Main Place of Residence-MPR”, Additional file 1: Table S1).
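
A discrepancy check between paired paper and EDC records can be sketched as below: match records on the unique identifier, skip the excluded variables (CODE, STUDY, INI, POB), and flag any field where the two systems disagree. The example records and values are invented for illustration; this is not the study's actual comparison code.

```python
# Minimal sketch: count discrepancies between paired paper-based and EDC records,
# matched on the unique identifier. Variable names beyond the excluded set are hypothetical.
paper = {"P001": {"AGE": 34, "SEX": "F", "DIAG": "enteric fever"},
         "P002": {"AGE": 51, "SEX": "M", "DIAG": "pneumonia"}}
edc   = {"P001": {"AGE": 34, "SEX": "F", "DIAG": "enteric fever"},
         "P002": {"AGE": 51, "SEX": "M", "DIAG": "pneumonie"}}   # spelling discrepancy

excluded = {"CODE", "STUDY", "INI", "POB"}   # variables excluded from the comparison

def count_discrepancies(paper, edc, excluded):
    mismatches = []
    for code in paper.keys() & edc.keys():           # records present in both systems
        for var in paper[code].keys() - excluded:    # compare only the included variables
            if paper[code][var] != edc[code].get(var):
                mismatches.append((code, var))
    return mismatches

print(count_discrepancies(paper, edc, excluded))     # [('P002', 'DIAG')]
```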

Wind Data Collection and Analysis in Kumasi

The writing of this research paper was partially driven by the need to revive the collection of climatic data initiated in 1993 but stopped in 2004 by the Solar Energy Application Laboratory (SEAL) of the Mechanical Engineering Department of KNUST. Weather data sets were collected by SEAL using weather monitoring equipment such as a propeller anemometer, radiometers for both global and diffuse irradiation, an air-temperature/relative-humidity sensor and a rain gauge, all manufactured by Kipp and Zonen. The climatic data collected by SEAL were obtained at a height of about 7 m. An annual average wind speed of about 1.5 m/s was recorded at the project site (the roof top of the building housing SEAL). This paper collected real wind data on two principal characteristics of wind, namely wind speed and wind direction, at a recording site located on top of the new classroom block of the College of Engineering (COE) on the KNUST campus at a height of 20 m.
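
The excerpt reports an average of about 1.5 m/s at roughly 7 m and new measurements at 20 m. A common way to relate speeds at two heights is the power-law wind-shear profile, shown below as a hedged sketch; the paper itself may use a different method, and the shear exponent here is only an assumed open-terrain value.

```python
# Power-law wind shear: v2 = v1 * (z2 / z1) ** alpha.
# alpha = 1/7 is a common open-terrain assumption; the site's actual exponent may differ.
def extrapolate_wind_speed(v1: float, z1: float, z2: float, alpha: float = 1 / 7) -> float:
    return v1 * (z2 / z1) ** alpha

v_7m = 1.5          # annual average reported by SEAL at about 7 m (m/s)
v_20m = extrapolate_wind_speed(v_7m, z1=7.0, z2=20.0)
print(f"Estimated speed at 20 m: {v_20m:.2f} m/s")   # roughly 1.7 m/s
```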

Improving Driver Safety through Naturalistic Data Collection and Analysis Methods

After the 100-car study, a number of additional naturalistic studies were performed through partnerships with organizations such as the Federal Motor Carrier Safety Administration [9], the National Highway Traffic Safety Administration, the National Institutes of Health (NIH), and the Crash Avoidance Metrics Partnership [5]. This rapid database expansion increased storage demands to over 100 terabytes and highlighted the need to improve compression and access-speed performance. These studies fueled a transition to dedicated standard (video) and high-speed (parametric data) storage server racks attached to database management systems based on the structured query language (SQL). An independent local area network (LAN) protects against outside intruders, while the redundant array of independent disks (RAID) and routine offsite tape backup protect data from hardware failures. Although a 16-processor Windows computing cluster is used for the analysis of large jobs, most analysis is performed on local machines.
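
As a rough illustration of the relational storage arrangement described (high-rate parametric data and video references managed through SQL), the sketch below creates a toy schema with sqlite3. The table and column names are hypothetical and do not reflect the project's actual database design.

```python
import sqlite3

# Hedged sketch of a relational layout for naturalistic driving data: parametric samples
# in one table, video references in another. Schema is illustrative only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE trip (
    trip_id     INTEGER PRIMARY KEY,
    driver_id   INTEGER NOT NULL,
    start_utc   TEXT NOT NULL
);
CREATE TABLE parametric_sample (
    trip_id     INTEGER REFERENCES trip(trip_id),
    t_offset_ms INTEGER NOT NULL,       -- time into the trip
    speed_mps   REAL,
    accel_x     REAL,
    PRIMARY KEY (trip_id, t_offset_ms)  -- keeps time-ordered access fast
);
CREATE TABLE video_segment (
    trip_id     INTEGER REFERENCES trip(trip_id),
    camera      TEXT NOT NULL,          -- e.g. 'forward', 'driver-face'
    file_path   TEXT NOT NULL
);
""")
conn.execute("INSERT INTO trip VALUES (1, 42, '2006-03-01T08:15:00Z')")
conn.execute("INSERT INTO parametric_sample VALUES (1, 0, 13.4, 0.1)")
print(conn.execute("SELECT COUNT(*) FROM parametric_sample").fetchone())
```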

An Analysis of the Plastic Waste Collection and Wealth Linkages in Ghana

Considering the self-evident nature of the plastic waste situation in the Kumasi metropolis, visits to suburbs and public places were undertaken to gain in-depth knowledge and understanding of the situation. The research reviewed extensive literature on plastic waste collection and recycling. Apart from the use of documented sources, the study also generated first-hand information from the field. Purposive and simple random sampling techniques were utilized to select interviewees. The study relied on qualitative and quantitative approaches, taking into consideration sources of data, sampling techniques, data collection techniques, as well as data analysis and presentation techniques. Using a purposive sampling technique, in-depth interviews were organized with waste pickers, officials of plastic waste recycling companies, and managers of solid waste dumps. Officials from the Ministries of Local Government and Rural Development (MLGRD) and Environment, the Environmental Protection Agency (EPA), the Regional Coordinating Council (RCC), and the Kumasi Metropolitan Assembly (KMA) were also interviewed. The interviews covered themes on the quantity of plastic waste they are able to pick, its price value, and their role in solid waste management, as well as their detailed knowledge about the enterprise. Additionally, the study used focus group discussions (FGDs) as its primary data collection technique among the scavengers/waste pickers, recycling plant workers and other interested identifiable parties.
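
The two selection steps the study describes, a purposive list of key informants plus a simple random sample of waste pickers, can be sketched as follows. The informant roles, sampling frame, and sample size are hypothetical placeholders, not the study's actual numbers.

```python
import random

# Illustrative sketch of combining purposive and simple random sampling to select interviewees.
key_informants = ["EPA official", "KMA waste officer", "recycling plant manager"]  # purposive list

waste_picker_register = [f"picker_{i:03d}" for i in range(1, 121)]   # hypothetical sampling frame
random.seed(1)                                                       # reproducible selection
sampled_pickers = random.sample(waste_picker_register, k=15)         # simple random sample

interviewees = key_informants + sampled_pickers
print(len(interviewees), interviewees[:5])
```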

Feasibility and effectiveness of thoracic spine mobilization on sympathetic/parasympathetic balance in a healthy population - a randomized controlled double-blinded pilot study

Background: Physiotherapists often use thoracic spine mobilization (TSM) to reduce pain in patients with back disorders via a reduction of sympathetic activity. There is a “trade-off” between sympathetic and parasympathetic nervous system activity. A sympathetic/parasympathetic balance (SPB) is needed to guarantee body homeostasis. However, body homeostasis is seldom considered as an aim of treatment from the perspective of most physiotherapists. Strong empirical evidence for the effects of TSM on the SPB is still lacking. Some studies showed that spinal manipulation may yield beneficial effects on SPB. Therefore, it could be hypothesized that TSM is feasible and could influence SPB reactions. The primary aim was to describe the participants’ adherence to the intervention and to the measurement protocol, to identify unexpected adverse events (UAE) after TSM, to evaluate the best method to measure SPB parameters (heart rate variability (HRV), blood pressure (BP), heart rate (HR), skin perfusion and erythema) and to assess the investigation procedure. The secondary aim was to assess the effects of TSM on SPB parameters in a small sample of healthy participants. Methods: This crossover pilot study investigated TSM using posterior-anterior mobilization (PAM) and anterior-posterior mobilization (APM) on segments T6 to T12 in twelve healthy participants during two consecutive days. To evaluate feasibility, the following outcomes were assessed: adherence, UAE, data collection and data analysis. To evaluate the effect of TSM on SPB, HRV, BP, HR, skin perfusion and erythema were measured.
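
Of the SPB parameters listed, heart rate variability has a standard numerical recipe, sketched below as common time-domain indices (SDNN, RMSSD) computed from RR intervals. The interval values are invented, and the study's actual HRV device, artifact handling, and frequency-domain analysis are not reproduced here.

```python
import math

# Hypothetical RR intervals in milliseconds.
rr_intervals = [812, 798, 830, 845, 801, 788, 820, 835]

mean_rr = sum(rr_intervals) / len(rr_intervals)
hr = 60000 / mean_rr                                     # heart rate in beats per minute

# SDNN: standard deviation of all RR intervals (overall variability).
sdnn = math.sqrt(sum((rr - mean_rr) ** 2 for rr in rr_intervals) / (len(rr_intervals) - 1))

# RMSSD: root mean square of successive differences (short-term, vagally mediated variability).
successive_diffs = [b - a for a, b in zip(rr_intervals, rr_intervals[1:])]
rmssd = math.sqrt(sum(d ** 2 for d in successive_diffs) / len(successive_diffs))

print(f"HR {hr:.1f} bpm, SDNN {sdnn:.1f} ms, RMSSD {rmssd:.1f} ms")
```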

The Aging, Community and Health Research Unit—Community Partnership Program for older adults with type 2 diabetes and multiple chronic conditions: a feasibility study

Finally, we examined feedback from the research assistants and researchers on the data collection and analysis procedures. Table 7 (Appendix) provides the feedback from research assistants and researcher review of the response data and feedback from the focus group sessions. Research assistants indicated that the length of time for the baseline and 6-month interviews was 1.5–2 h and 1.5 h, respectively. They reported that the MoCA was time consuming, and the research team questioned the appropriateness of this instrument, given its limited role in the study (validating informed consent). Providers reported that falls were a concern for many participants and suggested that a falls assessment might be appropriate. Several concerns arose with administering the SDSCA, including its face validity with clients, the inapplicability of several questions (e.g., glucose monitoring, medications), and its potential for misinterpretation (which resulted in the removal of one client from the analysis). Regarding HbA1C measures, the main concerns were relevance (e.g., appropriateness relative to other measures, consistency with participant values/goals) and ensuring that the timing was more precise relative to the baseline and 6-month benchmarks. Researchers also felt that monthly receipt of the program documentation was too infrequent to facilitate tracking. The last column of (Appendix) Table 7 maps the issues raised to suggested changes to the data analysis and collection methods for the RCT (see “Discussion” section below).

Relevance assessment of crowdsourced data (CSD) using semantics and geographic information retrieval (GIR) techniques

The quality of geospatial data has long been considered in the field of geospatial information management, where assessment parameters and techniques are often defined [10]. However, CSD does not follow standard data-collection procedures, nor is the data generated by skilled geospatial professionals. Therefore, CSD often does not have a clear data structure or metadata, and so the application of traditional spatial data-quality assessment parameters and techniques may be problematic. Researchers are therefore exploring new parameters and methods for CSD quality assessment and have identified credibility and relevance as possible quality indicators [10–16]. Choosing the most relevant geospatial information is important if high-quality outcomes are expected in geospatial-data-dependent applications, as not all CSD may be related or relevant to the task at hand. Data that is not relevant or has low relevance is of limited use for applications such as emergency management. Large datasets may contain data of low relevance, and relevance analysis of CSD is therefore important prior to utilizing this data in applications that require relevant and trustworthy data.

Evaluation of the Accuracy and Automation of Travel Time and Delay Data Collection Methods

Introduction of GPS into the public sector provided a significant advancement in active test vehicle data collection. In recent years, a number of applications for GPS technology have led to innovative methodologies that have direct and indirect relevance to travel time data collection. Many of these experiments offer new ways of organizing and automating the data collection procedure. In one such study, Hunter developed the Travel Run Intersection Passing Time Identification (TRIPTI) algorithm [11] for the collection and analysis of GPS-based travel time data. The TRIPTI algorithm first checks each data point location against the known location of each intersection to determine which intersections were traversed. Then the algorithm determines the crossing time of the data point nearest the exiting reference line.
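
The two steps described above can be illustrated with a simplified sketch: check each GPS point against known intersection locations, then take the timestamp of the point nearest the reference location as the crossing time. This is not the published TRIPTI algorithm; it uses a flat-earth distance approximation, a point rather than an exiting reference line, and invented coordinates and thresholds.

```python
import math

# Simplified illustration of intersection traversal detection and crossing-time extraction.
def dist_m(p, q):
    """Approximate distance in metres between (lat, lon) pairs, flat-earth approximation."""
    dx = (p[1] - q[1]) * 111_320 * math.cos(math.radians(p[0]))
    dy = (p[0] - q[0]) * 111_320
    return math.hypot(dx, dy)

intersections = {"Main & 1st": (42.2800, -83.7430), "Main & 2nd": (42.2815, -83.7430)}
gps_trace = [  # (timestamp_s, lat, lon), hypothetical travel run
    (0, 42.2790, -83.7430), (10, 42.2799, -83.7430),
    (20, 42.2808, -83.7430), (30, 42.2816, -83.7430),
]

for name, loc in intersections.items():
    # Step 1: was this intersection traversed (any point within 50 m of it)?
    near = [(t, dist_m((lat, lon), loc)) for t, lat, lon in gps_trace
            if dist_m((lat, lon), loc) < 50]
    if near:
        # Step 2: crossing time = timestamp of the data point nearest the reference location.
        crossing_time = min(near, key=lambda td: td[1])[0]
        print(f"{name}: crossed at t = {crossing_time} s")
```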

Hierarchy And Interrelationships Of Research Methodological Concepts

It is taking stock of the whole process (assessing the journey after arrival) of data collection. Some figurative questions asked include: How was the journey? Arrived safely? Missed some steps on the way? It ranges from more abstract to more tangible. It is conceptualized in the mind and presented in words, diagrams, models and printouts. According to Cauvery, Nayak, Girija, Meenakshi and Chand (2007), “data processing consists of editing, coding and tabulation” (p. 187). In processing, the researcher tries to organize and classify data in preparation for analysis. In analysis, the researcher tries to organize data so as to respond to the research problem. Thomas and Hodges (2010) recapitulate: “in broad terms, data analysis is the process of drawing meaning from or making sense of the information or evidence collected from the project” (p. 23).

PubMedCentral-PMC5710985.pdf

It was the consensus of the Brighton Collaboration Gestational Diabetes Mellitus Working Group to recommend the following guidelines to enable meaningful and standardized collection, analysis, and presentation of information about gestational diabetes. However, implementation of all guidelines might not be possible in all settings. The availability of information may vary depending upon resources, geographical region, and whether the source of information is a prospective clinical trial, a post-marketing surveillance or epidemiological study, or an individual report of gestational diabetes. Also, as explained in more detail in the overview paper in this volume, these guidelines have been developed by this working group for guidance only, and are not to be considered a mandatory requirement for data collection, analysis, or presentation.

The Effects of Data Collection Methods in Twitter

There have been recent efforts to use social media to estimate demographic characteristics, such as age, gender or income, but there has been little work on investigating the effect of data acquisition methods on producing these estimates. In this paper, we compare four different Twitter data acquisition methods and explore their effects on the prediction of one particular demographic characteristic: occupation (or profession). We present a comparative analysis of the four data acquisition methods in the context of estimating occupation statistics for Australia. Our results show that the social network-based data collection method seems to perform the best. However, we note that each different data collection approach has its own benefits and limitations.

Choosing Methods and Tools for Data Collection

A sample survey is a quantitative data collection method that can be used to collect information on any number of topics. Common techniques used in sample surveys include measurement techniques such as anthropometric surveys (nutritional measures of children) and interviewing techniques (e.g., asking the respondent how many meals he or she has eaten in the last week, and what foods he or she ate). Surveys employing interviewing techniques most often utilise closed-ended questions listed in questionnaires that are uniformly applied to each respondent. The intent is to gather data to test a pre-determined hypothesis, and only answers to those questions/variables included in the questionnaire are collected. This eases analysis, but limits the degree to which respondents participate and are able to provide explanations of what they perceive (causes, rationale). Rather, explanations are sought by comparing associations and potentially causal relationships between variables (e.g. diarrhoea prevalence is lower among children whose primary drinking water source is a borehole, therefore the lower prevalence is explained by the source of water).
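
The kind of association the passage ends with (diarrhoea prevalence by drinking-water source) can be sketched from closed-ended survey responses as below. The response data are invented, and the comparison shows only an association, as the passage notes, not proof of causation.

```python
# Hypothetical closed-ended survey responses: (water_source, had_diarrhoea).
responses = [
    ("borehole", False), ("borehole", False), ("borehole", True), ("borehole", False),
    ("surface", True), ("surface", False), ("surface", True), ("surface", True),
]

def prevalence(source: str) -> float:
    """Proportion of respondents with diarrhoea among those using the given water source."""
    cases = [had for s, had in responses if s == source]
    return sum(cases) / len(cases)

for source in ("borehole", "surface"):
    print(f"{source}: diarrhoea prevalence {prevalence(source):.0%}")
# borehole: 25%, surface: 75% -- an association between variables, not by itself causation.
```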

Classification of data collection methods (= Deliverable 3.1 of the OrganicDataNetwork project: Report on collection methods)

compilation methods and the evaluation of these methods. The survey results on all existing organic market data collection methods in Europe, which were collected and compiled in an online survey among European organic market data collectors, are classified according to predefined criteria. This is the prerequisite for the overall evaluation of existing data collection methods, including consistency and comprehensiveness, as well as the assessment of data quality. These results were intensively discussed and complemented by interested national data collectors in the first project workshop. Furthermore, the compatibility of the existing data collection methods in Europe is analysed to answer the question of whether and how the different national data can be merged into European statistics. Eurostat currently compiles statistics on organic data, such as area data, operator data, livestock numbers, primary crop production, and livestock product volumes. Although Eurostat has the most elaborate system of transnational data collection related to the organic market, its current data collection does not take into account data on retail volumes and values, or import and export volumes and values, for important organic agricultural products. This was one of the suggestions made by the former EU project consortium of the concerted action ‘European Information System for Organic Markets’ (EISfOM) (Rippin et al., 2006). In addition to the EISfOM project, several other EU projects have dealt with related issues: OFCAP (Häring and Dabbert, 2000), OMIaRD (Hamm and Gronefeld, 2004), and EU-CEEOFP (Stolze and Lampkin, 2005). Together they have published a number of reports on the development of the EU organic sector and have thus helped to develop a framework for reporting valid and reliable data. The EISfOM project suggested the introduction of legal requirements committing member states to provide data. Since the implementation of the revised regulation on organic farming, more data has become available and can be accessed more easily (Eurostat, 2010). Nevertheless, more coherent data collection and thorough data analysis are needed to overcome the current dispersion and fragmentation of data sources. So far only a few countries publish consistent official statistics. Hence there is no common, holistic approach that makes sound decision-making in the European organic sector possible (Rippin et al., 2006). Building on this stated problem, the current EU project (OrganicDataNetwork) identifies gaps in databases and tries to bridge them.

Instrument development, data collection and characteristics of practices, staff and measures in the Improving Quality of Care in Diabetes (iQuaD) study

Whilst the organisational measures were standard questionnaires (and achieved expected levels of internal consistency), our operationalisation of the individual cognition measures was good, with measures of internal consistency all well within accepted ranges and good content coverage of the constructs. Many of the individual cognition scores are high, suggesting that respondents are already positively inclined towards performing the behaviours. These two groups of measures will together form a large part of our explanatory variables in explaining variation in rates of performing the behaviours. A standard analysis would calculate the variance in behaviour explained by each measure but, under circumstances such as these (where values are very positive), it is possible that contextual and environmental factors are important in whether or not the behaviours are successfully performed. Given the range of such factors that we have measured, we will be able to perform a more comprehensive analysis to generate hypotheses about where it might be best to intervene to improve performance.
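
The internal consistency mentioned here is commonly quantified with Cronbach's alpha, sketched below from the item-level variances and the variance of the summed scale. The item scores are invented, and the iQuaD study may have used a different coefficient or software.

```python
import statistics as st

def cronbach_alpha(items: list[list[float]]) -> float:
    """Cronbach's alpha for a multi-item scale; each inner list is one item's scores."""
    k = len(items)
    item_variances = sum(st.variance(item) for item in items)
    total_scores = [sum(scores) for scores in zip(*items)]   # each respondent's scale total
    total_variance = st.variance(total_scores)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical 5-point responses from six respondents to a three-item cognition scale.
item1 = [4, 5, 3, 4, 5, 4]
item2 = [4, 4, 3, 5, 5, 4]
item3 = [5, 5, 2, 4, 4, 4]
print(f"alpha = {cronbach_alpha([item1, item2, item3]):.2f}")   # about 0.81 for this toy data
```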
