Choosing Methods and Tools for Data Collection

Although a review of secondary data sources should precede any primary data collection, primary data collection is sometimes warranted: existing data do not always provide the appropriate indicators, or the appropriate disaggregation of indicators, needed to monitor and evaluate WFP operations effectively. Even secondary data that provide the appropriate indicators and disaggregation may not be useful if they are out of date and the situation is likely to have changed since they were collected. How quickly data become stale varies greatly with the indicator being collected and its volatility. For example, school enrolment data that are a year old may suffice for establishing baseline conditions prior to a school feeding programme, but data on acute malnutrition (wasting) that are only a month old may no longer represent an accurate estimate of current conditions for that indicator.
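
The recency check described above can be made explicit. Below is a minimal Python sketch, not from the source, in which each indicator carries an assumed maximum acceptable data age; the 12-month and 1-month cut-offs simply mirror the enrolment and wasting examples, and real thresholds would depend on the programme and context.

```python
from datetime import date

# Illustrative maximum acceptable age (in days) per indicator, following the
# passage's examples; the thresholds themselves are assumptions, not WFP policy.
MAX_AGE_DAYS = {
    "school_enrolment": 365,   # annual data may still suffice for a baseline
    "acute_malnutrition": 30,  # wasting estimates go stale within weeks
}

def secondary_data_usable(indicator: str, collected_on: date, today: date) -> bool:
    """Return True if existing secondary data are recent enough to reuse."""
    age_days = (today - collected_on).days
    return age_days <= MAX_AGE_DAYS.get(indicator, 90)  # default cutoff is arbitrary

# Example: year-old enrolment data passes, two-month-old wasting data does not.
print(secondary_data_usable("school_enrolment", date(2023, 6, 1), date(2024, 5, 1)))    # True
print(secondary_data_usable("acute_malnutrition", date(2024, 3, 1), date(2024, 5, 1)))  # False
```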

Data collection tools for maternal and child health in humanitarian emergencies: a systematic review

To plan and evaluate interventions and actions that will save lives in humanitarian emergencies, appropriate data are needed. To ensure that the tools used to obtain such data are easy to use and comprehensive, it is essential that individuals involved both in field operations and in operations research continue to work together. New standardized tools should be developed, and existing ones adapted, based upon standards for data collection in emergencies, with input from humanitarian agencies.


Understanding cost data collection tools to improve economic evaluations of health interventions

Micro-costing data collection tools often used in the literature include standardized comprehensive templates, targeted questionnaires, activity logs, on-site administrative databases, and direct observation. These tools are not mutually exclusive and are often used in combination. Each tool has unique merits and limitations, and some may be more applicable than others under different circumstances. Proper application of micro-costing tools can produce quality cost estimates and enhance the usefulness of economic evaluations to inform resource allocation decisions. A common method to derive both the fixed and variable costs of an intervention involves collecting data from the bottom up for each resource consumed (micro-costing). We scanned economic evaluation literature published in 2008–2018 and identified the micro-costing data collection tools used. We categorized the identified tools and discuss their practical applications in an example study of health interventions, including their potential strengths and weaknesses. Sound economic evaluations of health interventions provide valuable information for justifying resource allocation decisions, planning for implementation, and enhancing the sustainability of the interventions. However, the quality of intervention cost estimates is seldom addressed in the literature. Reliable cost data form the foundation of economic evaluations, and without reliable estimates, evaluation results, such as cost-effectiveness measures, could be misleading. In this project, we identified data collection tools often used to obtain reliable data for estimating the costs of interventions that prevent and manage chronic conditions and considered practical applications to promote their use.
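
As a concrete illustration of the bottom-up (micro-costing) approach described above, here is a short Python sketch; the resource list, quantities, unit costs and the fixed/variable split are invented for illustration and are not drawn from the reviewed studies.

```python
# Minimal bottom-up (micro-costing) sketch: each resource consumed by the
# intervention is listed with a quantity and a unit cost, then summed.
resources = [
    # (resource, quantity, unit_cost, is_fixed) -- all values are assumptions
    ("nurse time (hours)",           120,  35.0, False),
    ("screening kits (units)",       400,   2.5, False),
    ("clinic space rental (months)",  12, 800.0, True),
    ("staff training (sessions)",      2, 500.0, True),
]

fixed_cost = sum(q * c for _, q, c, is_fixed in resources if is_fixed)
variable_cost = sum(q * c for _, q, c, is_fixed in resources if not is_fixed)
total_cost = fixed_cost + variable_cost

participants = 400  # assumed number of people reached
print(f"fixed: {fixed_cost:.2f}, variable: {variable_cost:.2f}, "
      f"total: {total_cost:.2f}, cost per participant: {total_cost / participants:.2f}")
```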

The Toronto prehospital hypertonic resuscitation head injury and multi organ dysfunction trial (TOPHR HIT) Methods and data collection tools

The application of rigorous clinical trial methodology in the prehospital setting requires overcoming a number of challenges unique to that setting. We conducted a feasibility study prior to implementing a larger randomized clinical efficacy trial. This feasibility study differs significantly from other prehospital trauma trials because it was designed by investigators from multiple disciplines, including immunology, neurology, surgery, anaesthesia, neuropsychology and neuroimaging, to address aspects of TBI not evaluated previously, and it employs a randomized clinical trial design and waiver of consent. In addition to the primary study objective of survival at 30 days, we plan to evaluate circulating and cellular immunomodulation within the first 24 hours, neurocognitive and neuropsychological testing at 4 and 12 months, and structural damage through MRI scanning up to 4 months post trauma. We report the methods in detail and have appended the data collection tools and case report forms for this feasibility study.

Designing Effective, Scalable Data Collection Tools to Measure Farmers Market Impacts

A range of farmers market–focused data collection methodologies have been used since the late 1990s, each adding to market organizations’ efforts to measure their internal and external impacts in a consistent, comparable manner. At least three were designed expressly with farmers markets in mind: Rapid Market Assessment (RMA), Sticky Economic Evaluation Device (SEED), and FM Tracks. Each of these systems greatly expanded the potential for data collection within market organizations by adding new collection, entry, or reporting functions. However, FMC and its research partners noted the limited use of these tools by markets over the past decade, as evinced by continued requests for technical assistance around evaluation and the lack of reports from markets using these data. In response, FMC added evaluation functions beyond what the tools listed below already offered, as outlined in Table 1 and described in the next sections. The functionality that Metrics added to each approach is given in a bulleted subsection.

The Honeynet Project: Data Collection Tools, Infrastructure, Archives and Analysis

This time, the license agreement to host a GDH node will allow the data produced to be made available to all Honeynet Project members (and selected external partner organizations). This will allow the Honeynet Project to maintain a larger incident response team and, hopefully, to publish more interesting and timely research. We also aim to substantially increase our sensor deployment footprint by evaluating both bootable and embedded Linux nodes for lightweight Nepenthes sensors or OpenVPN gateways to a central honeyfarm. We also plan to work with groups such as Shadowserver to “outsource” some elements of our data processing to organizations with existing significant resources in areas that would otherwise require further internal development.

Conducting a Large Public Health Data Collection Project in Uganda: Methods, Tools, and Lessons Learned | Journal of Research Practice

A research protocol was developed by the Health Systems and Human Resources Team within the Health Economics, Systems and Integration Branch in the Division of Global HIV/AIDS at the US Centers for Disease Control and Prevention (CDC). The CDC entered into a cooperative agreement with the University of Washington (UW) in September 2010, allocating USD 1,170,588 over 3 years. The UW, in turn, contracted with its long-term partner, Makerere University, Uganda, and structured its contract based on deliverables, such as the number of facilities from which data were to be collected and the number of manuscripts to be drafted. A deliverables-based contract (rather than paying on budget line items for individual cost elements such as staffing, supplies, or travel) conveyed our trust in the Ugandan partner to deliver study products with little direct supervision by the UW. Early in the project, the collaborators developed a list of likely manuscripts to emerge from the project, with a lead author identified for each manuscript. Our experience was that researchers with first-author responsibility would ensure the required information was collected in relation to their manuscripts throughout the planning, data collection, and analysis processes. Detailed notes of our twice-monthly conference calls and trip reports were kept to track team decisions and progress. We established a password-protected project website to store materials such as meeting minutes, data variables, data dictionaries, analysis plan, links to other source material, training materials, questionnaires, abstracts, bibliographies, and manuscripts.

Tools and Methods for data collection in ethnobotanical studies of homegardens

During these interviews, we walk with the gardeners through their homegardens. We point to every plant species and ask them the same set of pretested questions with a combination of precoded check-the-box questions, fill-in-the-blank questions, and open-ended questions (Martin 1995; Alexiades and Sheldon 1996). The type of question depends on the topic to be asked and on our knowledge of possible answers in the queried domains. As a result, we have a data sheet for every plant species in every homegarden. The name(s) of the observed species are recorded as given by the gardener. This might be a name in local dialect or language, but it could be a name from a book or a commercial seed package. Regardless of the source of the name, we record it exactly as stated by the gardener. Our main concern is to get the specific gardener’s name(s) for each species, which is critical for reducing the risk of misinterpreting our informant’s statements. Voucher specimens or pictures are additional tools to prompt recall of particular species. When the respondent gives no name for a particular plant, we leave the space for the vernacular name blank, although it is tempting to add what we already know to be the common name. Nevertheless, in all cases, the scientific name for the species has to be ascertained at that moment to ensure that we can find the same species again if necessary.
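
To make the “one data sheet per species per homegarden” idea concrete, here is a small Python sketch of such a record; the field names and example values are hypothetical, not the authors’ actual instrument.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PlantRecord:
    """One data sheet row: a single species observed in a single homegarden.

    Field names are illustrative assumptions, not the published data sheet."""
    homegarden_id: str
    vernacular_name: Optional[str]   # exactly as given by the gardener; None if no name offered
    scientific_name: str             # ascertained on the spot (or via voucher specimen)
    voucher_or_photo: Optional[str]  # reference to a voucher specimen or photograph, if taken
    uses: list                       # answers to the open-ended "use" questions

record = PlantRecord(
    homegarden_id="HG-014",
    vernacular_name=None,            # left blank rather than filled with the "known" common name
    scientific_name="Ocimum basilicum",
    voucher_or_photo="voucher-0231",
    uses=["culinary", "medicinal"],
)
print(record)
```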

A narrative review of data collection and analysis guidelines for comparative effectiveness research in chronic pain using patient-reported outcomes and electronic health records

The PORT platform includes the integration of the CHOIR system into pain medicine clinics and linkage of its database to outpatient and inpatient EHR data. It provides an ideal data source to conduct high-quality practice-based CER in a Learning Healthcare System. With the PROMIS measures included in CHOIR and the customized patient PDF report for each survey, clinicians and researchers are able to evaluate each patient with tools that are both normalized and validated for the US population. Continuous collection of these measures at each appointment greatly facilitates personalization of treatment plans and evaluation of prior interventions. As of this publication date, we have collected over 60,000 surveys in more than 24,000 unique patients, with each patient’s PRO data cross-linked to their corresponding EHR data. On average, CHOIR data are captured for 70% of our patients. New surveys are accrued at a rate of approximately 500 per week.
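
The cross-linking of PRO surveys to EHR data described above can be sketched as a toy join; the table layouts, column names and the capture-rate calculation below are assumptions for illustration, not the CHOIR or PORT schema.

```python
import pandas as pd

# Hypothetical extracts: CHOIR-style PRO surveys and an EHR patient table.
surveys = pd.DataFrame({
    "patient_id": [1, 1, 2, 3],
    "survey_date": pd.to_datetime(["2024-01-05", "2024-02-09", "2024-01-12", "2024-03-01"]),
    "pain_interference_t": [62.1, 58.4, 55.0, 60.3],
})
ehr_patients = pd.DataFrame({
    "patient_id": [1, 2, 3, 4],
    "sex": ["F", "M", "F", "M"],
})

# Cross-link each patient's PRO data to the EHR record via the shared identifier.
linked = surveys.merge(ehr_patients, on="patient_id", how="left")

# Capture rate: share of EHR patients with at least one PRO survey (cf. the ~70% reported).
capture_rate = surveys["patient_id"].nunique() / ehr_patients["patient_id"].nunique()
print(linked)
print(f"capture rate: {capture_rate:.0%}")
```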

Survey of Learning Analytics based on Purpose and Techniques for Improving Student Performance

Academic analytics operates at the institution level. An institution can use academic analytics to gauge the success of its students; the results can also be used to attract public attention, and reports of the analysis can support the institution’s publicity. Two types of analytics are possible on educational data. Course-level analytics is based on analysis of the course, and both learners and educators benefit from it. Aggregate analytics is based on aggregate analysis, which involves predictive modeling and the patterns of success or failure in a course. This paper is organized as a review of various learning analytics applications used in university education systems. The purpose of learning analytics is reviewed, the different types of students involved in the education system are considered, and the various tools, techniques and data collection methods used to implement learning analytics are discussed.

The AmeriFlux data activity and data system: an evolving collection of data management techniques, tools, products and services

Abstract. The Carbon Dioxide Information Analysis Center (CDIAC) at Oak Ridge National Laboratory (ORNL), USA, has provided scientific data management support for the US Department of Energy and international climate change science since 1982. Among the many data archived and available from CDIAC are collections from long-term measurement projects. One current example is the AmeriFlux measurement network. AmeriFlux provides continuous measurements from forests, grasslands, wetlands, and croplands in North, Central, and South America and offers important insight into carbon cycling in terrestrial ecosystems. To successfully manage AmeriFlux data and support climate change research, CDIAC has designed flexible data systems using proven technologies and standards blended with new, evolving technologies and standards. The AmeriFlux data system, composed primarily of a relational database, a PHP-based data interface and an FTP server, offers a broad suite of AmeriFlux data. The data interface allows users to query the AmeriFlux collection in a variety of ways and then subset, visualize and download the data. From the perspective of data stewardship, the system is designed so that CDIAC can easily control database content, automate data movement, track data provenance, manage metadata content, and handle frequent additions and corrections. CDIAC and researchers in the flux community developed data submission guidelines to enhance the AmeriFlux data collection, enable automated data processing, and promote standardization across regional networks. Both continuous flux and meteorological data and irregular biological data collected at AmeriFlux sites are carefully scrutinized by CDIAC using established quality-control algorithms before the data are ingested into the AmeriFlux data system.
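
As an illustration of the query–subset–download workflow described above, here is a hedged Python/pandas sketch against a hypothetical flat export; the file name and column names (SITE_ID, TIMESTAMP, NEE, TA) are assumptions and do not describe CDIAC’s actual interface.

```python
import pandas as pd

# Hypothetical flat export of half-hourly flux data; the real system is queried
# through CDIAC's PHP interface and FTP server.
df = pd.read_csv("ameriflux_subset.csv", parse_dates=["TIMESTAMP"])

# Subset: one site, one growing season, plausible air-temperature range only.
mask = (
    (df["SITE_ID"] == "US-Ha1")
    & (df["TIMESTAMP"].between("2010-05-01", "2010-09-30"))
    & (df["TA"].between(-40, 50))          # crude quality screen on air temperature
)
subset = df.loc[mask, ["TIMESTAMP", "NEE", "TA"]]

# Simple product: daily mean net ecosystem exchange, ready to visualize or download.
daily_nee = subset.set_index("TIMESTAMP")["NEE"].resample("D").mean()
print(daily_nee.head())
```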

Acceptability of donor breast milk banking, its use for feeding infants, and associated factors among mothers in eastern Ethiopia

Ensuring data quality: After reviewing similar studies, we developed a structured, interview-based questionnaire. The tool was first prepared in English, translated into the local languages (Afan Oromo, Amharic and Somali) for the interviews, and finally translated back into English for data analysis. The questionnaire was pretested on 5% of the total sample size in an area different from the actual data collection site. The data were collected by eight BSc nurses who were given a two-day intensive training course on the content of the data collection tools, the study objectives and how to interview the study subjects. The collected data were checked for completeness each day.
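
The 5% pretest mentioned above is simple arithmetic; the sketch below assumes a total sample size of 422, which is an invented figure, not the study’s.

```python
import math

# Pretest size = 5% of the total sample, rounded up to whole respondents.
total_sample = 422        # assumed figure for illustration only
pretest_fraction = 0.05
pretest_n = math.ceil(total_sample * pretest_fraction)
print(pretest_n)          # 22 respondents interviewed outside the actual study site
```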

The Effect of Mental Health on Job Satisfaction of Principals (Case Study: Islamic Azad University of Tehran-East Branch)

Research Tools (Data Collection Tools): In this research, a questionnaire was the data collection tool. The questionnaire consisted of an introduction that familiarized respondents with the research type, goals and application in the field of mental health and job satisfaction of educational principals. In this part, respondents provided information about their gender, marital status, major of study, education level and management experience. The second part of the questionnaire consisted of 30 standardized questions about general health and 19 standardized questions about job satisfaction.

Data Analysis: Types, Process, Methods, Techniques and Tools

…the hypothesis and evaluate the outcomes [11]. Data collection methods can be divided into two categories: secondary methods of data collection and primary methods of data collection. Secondary data are data that have already been published in books, newspapers, magazines, journals, online portals, etc. There is an abundance of data available in these sources about almost any research area in business studies. Therefore, applying an appropriate set of criteria to select the secondary data used in a study plays an important role in increasing research validity and reliability. Primary data collection methods can be divided into two groups: quantitative and qualitative. Quantitative data collection methods are based on mathematical calculations in various formats; they include questionnaires with closed-ended questions, methods of correlation and regression, and measures such as the mean, mode and median, among others. Quantitative methods are cheaper to apply and can be applied within a shorter time than qualitative methods, and, owing to their high level of standardization, it is easy to compare findings. Qualitative research methods, by contrast, do not involve numbers or mathematical calculations; qualitative research is closely associated with words, sounds, feelings, emotions, colors and other non-quantifiable elements [12].
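
A short Python example of the quantitative methods named above (mean, median, mode, correlation and a simple regression), using the standard-library statistics module; the questionnaire scores are invented purely for illustration.

```python
import statistics as st

# Toy closed-ended questionnaire data (1-5 Likert scores); values are invented.
satisfaction = [4, 5, 3, 4, 2, 5, 4, 3, 4, 5]
hours_trained = [10, 12, 6, 9, 3, 14, 8, 5, 9, 13]

print("mean:", st.mean(satisfaction))
print("median:", st.median(satisfaction))
print("mode:", st.mode(satisfaction))

# Correlation and a simple linear regression of satisfaction on training hours
# (requires Python 3.10+ for statistics.correlation / linear_regression).
print("correlation:", round(st.correlation(hours_trained, satisfaction), 3))
slope, intercept = st.linear_regression(hours_trained, satisfaction)
print(f"regression: satisfaction ~ {round(slope, 3)} * hours + {round(intercept, 3)}")
```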

Choosing the right data collection mode for a changing research landscape

The research industry often uses terms like “nationally representative” to describe samples, and when using probability samples via postal mail or RDD telephone, it is almost true that we are sampling from the entire population of interest. But in fact, even with these methods, we are still sampling from a frame. The frame, if you remember from Sampling 101, is the list of people you can interview, and research results are only projectable to that frame. An online sampling frame, whether an access panel, a river sample or any other online source, is tiny relative to the population. Therefore, if you change sampling frames, for example by switching sampling suppliers, you risk changing the underlying population and therefore the results. In many countries across the globe, we also have significant coverage issues with online samples.
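
The coverage point can be illustrated with a toy simulation: if the online frame’s composition differs from the population’s, estimates follow the frame, not the population. All figures below (prevalences, frame sizes) are invented for illustration.

```python
import random

random.seed(1)

# Population holds an attitude with 50% prevalence, but the small online frame
# over-represents people who hold it (60%).
population = [random.random() < 0.50 for _ in range(100_000)]
online_frame = [random.random() < 0.60 for _ in range(5_000)]

def estimate(frame, n=1_000):
    """Estimate prevalence from a simple random sample drawn from the frame."""
    sample = random.sample(frame, n)
    return sum(sample) / n

print("estimate from full-coverage frame:", estimate(population))    # ~0.50
print("estimate from online frame:      ", estimate(online_frame))   # ~0.60
```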

The European Institute for Gender Equality annual report 2011

To facilitate the monitoring of gender equality advancements by the Member States within various EU policy areas, carry out cross-European comparisons and identify gender inequalities and data gaps, EIGE developed the first centralised source of gender statistics and data, found in its Women and men in the EU: Facts and figures database. This easily accessible online database comprises gender statistics, metadata and data sources, and offers harmonised and comparable baseline information. Likewise, progress in building a broader centralised statistical information system on gender grew in 2011 through the work on developing the Gender Equality Index. After analysis and selection of the various dimensions to be measured, the Index will provide the EU with a solid and robust tool enabling a comparable assessment of progress on gender equality in the Member States. To further support policy implementation through the collection, processing and dissemination of methods, tools, and good practices for gender equality and gender mainstreaming, EIGE started to produce a mass of diverse information on one of the crucial gender mainstreaming tools: gender training. The goal of this tool is to provide policymakers, researchers, practitioners and training providers with resources that will effectively support their efforts in mainstreaming gender.

Big Data Analytics Tools, Methods & Frameworks: A Comprehensive Review

Big data refers to collections of new information that must be made available to large numbers of users in close to real time, based on gigantic data inventories from multiple sources, with the goal of speeding up critical competitive knowledge discovery processes. Massive amounts of data have become accessible to data miners, making analysis and decision-making tasks much more challenging and tedious. Considering the massive volume and variety of data, the analyses, predictive and behavioral exploration of situations, and business intelligence workloads are beyond the capabilities of existing tools and methods. In recent years a number of big data tools and methods have been suggested to handle these massive quantities of data. The objective of this paper is to study and gain an in-depth understanding of the various attributes of big data science, engineering, tools and techniques. This study also analyzes several frameworks suggested by researchers and the ability of these frameworks to revolutionize the knowledge discovery process and enhance decision making. This objective is pursued via a wide-ranging review of the literature.

Developing collection management tools to create more robust and reliable linguistic data

File systems and naming conventions are often developed on an ad hoc basis and may go through several stages of evolution throughout the course of a documentation project. Metadata may be recorded in a variety of different ways, e.g., in a spreadsheet, a dedicated metadata editor, a text document, a field notebook, or a custom database. Depositing these data into an archive thus requires the linguist to reorganize data, file names, and descriptive metadata in order to satisfy the requirements of the receiving archive. And because different archives require different deposit formats, the linguist must in some cases repeat this process multiple times. For example, a researcher receiving funding from multiple sources may have to satisfy multiple archiving requirements. As a result, even well-intentioned researchers may postpone or even forgo archiving altogether. What these researchers lack is a tool to assist with the organization of their collections of data and metadata. While some useful tools have been developed, such as SayMore and CMDI Maker, the lack of uptake among the community of documentary linguists suggests that more development work is needed.
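
As a sketch of what such a collection-management tool might automate, the Python snippet below copies ad hoc recordings into an archive-style layout and writes a minimal metadata table; the paths, naming convention and metadata fields are assumptions, not the requirements of any particular archive, SayMore, or CMDI Maker.

```python
import csv
import shutil
from pathlib import Path

# Hypothetical source and deposit locations.
SOURCE = Path("fieldwork/raw")
DEPOSIT = Path("deposit/LANG1-collection")

DEPOSIT.mkdir(parents=True, exist_ok=True)
rows = []
for i, wav in enumerate(sorted(SOURCE.glob("*.wav")), start=1):
    new_name = f"LANG1-{i:03d}.wav"            # assumed naming convention, e.g. LANG1-001.wav
    shutil.copy2(wav, DEPOSIT / new_name)       # copy, leaving the original files untouched
    rows.append({"file": new_name, "original_name": wav.name, "speaker": "", "date": ""})

# Minimal descriptive metadata table to accompany the deposit.
with open(DEPOSIT / "metadata.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["file", "original_name", "speaker", "date"])
    writer.writeheader()
    writer.writerows(rows)
```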

Mothers’ and health workers’ perceptions of participation in a child friendly health initiative in rural South Africa

Data analysis. Quantitative data analysis: mothers’ data from the post-clinic interviews were entered into an Access database and imported into STATA 11 for analysis. Descriptive statistics were used to quantify and describe the demographic characteristics of the participants, including mothers and health workers. Qualitative data analysis: focus groups and in-depth interviews were transcribed verbatim, translated from isiZulu to English, imported into ATLAS.ti version 7 for analysis [34,35], and organised using the themes explored in the focus group and interview guides. Categories were reviewed for redundancy, and similar codes and categories were grouped under a single higher-order category. Higher-order categories, which resulted from collapsing codes with similar ideas, reflected the important thematic areas linked to the focus group and interview guide categories.
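
The collapsing of similar codes into higher-order categories can be sketched as a simple mapping; the codes, categories and quotes below are invented examples, not those from the study’s transcripts.

```python
from collections import defaultdict

# Invented codebook: similar codes are grouped under one higher-order category.
code_to_category = {
    "long queues": "clinic accessibility",
    "distance to clinic": "clinic accessibility",
    "friendly nurses": "staff attitudes",
    "feeling scolded": "staff attitudes",
}

# Invented coded excerpts from transcripts.
coded_quotes = [
    ("long queues", "We wait the whole morning."),
    ("friendly nurses", "The sister greeted us nicely."),
    ("distance to clinic", "It is far to walk with the baby."),
]

by_category = defaultdict(list)
for code, quote in coded_quotes:
    by_category[code_to_category[code]].append((code, quote))

for category, items in by_category.items():
    print(category, "->", [code for code, _ in items])
```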

Energy Efficient Data Collection Methods in Wireless Sensor Networks: A Survey

In TDMA-based protocols, time is divided into slots and each time slot is assigned to a node for transmission and reception. In [22, 23, 24], clustering is performed and the cluster head assigns the time slots for each node. A low-complexity time slot allocation mechanism named the lightweight medium access protocol (LMAC) is proposed in [25]; here the nodes select time slots randomly, and control messages and data units are sent directly so that the energy needed for preamble transmission is saved. A drawback of LMAC is its fixed frame length, so adaptive information-centric LMAC (AI-LMAC), in which slots can be selected based on traffic needs, is proposed in [26]. In [27], traffic-adaptive MAC (TRAMA) is introduced. This protocol helps reduce collisions during packet transmission and allows nodes to switch to a low-power state whenever they are not transmitting or receiving; TRAMA determines the traffic load of each node and avoids assigning time slots to nodes that have no data to send. FLAMA (flow-aware medium access) [28] is related to TRAMA and is mainly used for periodic monitoring applications. It focuses on reducing unnecessary exchange of traffic information; a pull-based mechanism is used so that data are transferred only when requested. In [29], a traffic-adaptive periodic data collection MAC (TA-PDC-MAC) is proposed to reduce the energy consumption of sensor nodes used in environmental monitoring applications. Unlike the existing PDC-MAC protocol, TA-PDC-MAC supports networks with different data generation rates. Here the sink node computes the time schedule for each node, and this staggered schedule helps reduce delay in the network and saves the energy otherwise spent on idle listening. In [30], TDMA scheduling with adaptive slot stealing and parallelism (TDMA-ASAP) is proposed, which uses parallel transmissions, slot stealing and adaptive sleeping between transmissions to save energy. TDMA-ASAP allows the network to adapt to changing conditions, but it is only applicable during periods of light load. In [31], a TDMA-based coloring algorithm is proposed that makes use of spatial reuse of the channel for data transmission. An advanced version of this method, which uses parallelism and allocates time slots in an energy-efficient way, is proposed in [32]: a MAC protocol named iQueue-MAC uses the queue length of each node to gauge its demand for time slots and allocates slots based on that queue length.
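
As a rough illustration of queue-length-driven slot allocation in the spirit of TA-PDC-MAC and iQueue-MAC, here is a hedged Python sketch; the greedy longest-queue-first rule, frame length and queue figures are assumptions for illustration, not the published algorithms.

```python
def allocate_slots(queue_lengths, slots_per_frame):
    """Greedy longest-queue-first slot assignment for one TDMA frame (a sketch)."""
    backlog = dict(queue_lengths)
    schedule = {node: [] for node in queue_lengths}
    for slot in range(slots_per_frame):
        node = max(backlog, key=backlog.get)  # node with the most queued packets
        if backlog[node] == 0:
            break  # every queue is empty: leave the remaining slots idle (nodes sleep)
        schedule[node].append(slot)
        backlog[node] -= 1
    return schedule

# Node C has nothing queued, so it receives no slots and can stay asleep this frame.
print(allocate_slots({"A": 5, "B": 2, "C": 0}, slots_per_frame=6))
# -> {'A': [0, 1, 2, 3, 5], 'B': [4], 'C': []}
```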
