NASA Making Earth System Data Records for Use in Research Environments (MEaSUREs) Global Food Security-support Analysis Data (GFSAD) Cropland Extent 2015 South Asia, Afghanistan, Iran 30 m V001
The two most common methods for land-cover mapping over large areas using remote-sensing images are manual classification based on visual interpretation and digital per-pixel classification. The former approach delivers products of high quality, such as the European CORINE Land Cover maps (Büttner, 2014). Although the human capacity for interpreting images is remarkable, visual interpretation is subjective (Lillesand et al., 2014), time-consuming, and expensive. Digital per-pixel classification has been applied for land-cover mapping since the advent of remote sensing and is still widely used in operational programs, such as the 2005 North American Land Cover Database at 250-m spatial resolution (Latifovic, 2010). Pixel-based classifiers such as the maximum likelihood classifier (MLC), neural network (NN) classifiers, decision trees, Random Forests (RF), and Support Vector Machines (SVM) are powerful, fast classifiers that help differentiate distinct landscape patterns. Both supervised and unsupervised approaches are adopted in pixel-based classification. However, per-pixel classification has several limitations. For example, the pixel's square shape is arbitrary in relation to the patchy or continuous land features of interest, and there is significant spectral contamination among neighboring pixels. As a result, per-pixel classification often leads to noisy outputs, the well-known "salt-and-pepper" effect. Pixel-based methods have two further limitations: (1) they fail to capture the spatial information of high-resolution imagery, such as 30-m Landsat imagery, and (2) they often classify the same field (e.g., a corn field) into different classes as a result of within-field variability, so a field with a single crop may be mapped as several different crops.
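As an illustration of the per-pixel paradigm discussed above, a minimal minimum-distance-to-centroid rule can stand in for the supervised classifiers listed (MLC, NN, RF, SVM). The class names, band choice, and reflectance centroids below are hypothetical; the point is that each pixel is labeled independently of its neighbors, which is exactly what produces isolated misclassified pixels (the "salt-and-pepper" effect):

```python
import math

# Hypothetical (NIR, red) reflectance centroids for three classes.
CENTROIDS = {
    "crop":  (0.45, 0.30),
    "water": (0.05, 0.04),
    "urban": (0.25, 0.28),
}

def classify_pixel(nir, red):
    """Assign the class whose spectral centroid is nearest (Euclidean)."""
    return min(CENTROIDS, key=lambda c: math.dist((nir, red), CENTROIDS[c]))

def classify_image(image):
    """Classify every pixel independently -- no spatial context is used,
    which is why isolated misclassified pixels appear in the output."""
    return [[classify_pixel(nir, red) for (nir, red) in row] for row in image]

# Example: a tiny 2x2 image of (NIR, red) reflectance pairs.
tiny = [[(0.44, 0.31), (0.06, 0.05)],
        [(0.26, 0.27), (0.43, 0.29)]]
print(classify_image(tiny))  # -> [['crop', 'water'], ['urban', 'crop']]
```

Object-based methods mitigate the noise by first segmenting the image into spectrally homogeneous regions and classifying those, rather than individual pixels.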
The goal of the Global Food Security-support Analysis Data @ 30-m (GFSAD30) project is to provide the highest-resolution, objective cropland datasets to assist in addressing global food and water security issues in the twenty-first century. The project proposed developing cropland products using time-series Landsat and Sentinel satellite sensor data, machine learning algorithms, and cloud-based computing. The project is funded by the National Aeronautics and Space Administration (NASA) with supplemental funding from the United States Geological Survey (USGS). The project is led by USGS and carried out in collaboration with NASA Ames, University of New Hampshire (UNH), California State University Monterey Bay (CSUMB), University of Wisconsin (UW), NASA GSFC, and Northern Arizona University (NAU). There were a number of international partners, including the International Crops Research Institute for the Semi-Arid Tropics (ICRISAT).
This research was supported by the CGIAR Research Program on Climate Change, Agriculture and Food Security (CCAFS) and the CGIAR Research Program on Water, Land and Ecosystems (WLE), which are carried out with support from the CGIAR Trust Fund and through bilateral funding agreements. For details visit https://ccafs.cgiar.org/donors and https://wle.cgiar.org/donors. Funding was also provided by NASA MEaSUREs, through the NASA ROSES solicitation, for a period of 5 years (1 June 2013 - 31 May 2018). The NASA Making Earth System Data Records for Use in Research Environments (MEaSUREs) grant number is NNH13AV82I and the USGS Sales Order number is 29039. We gratefully acknowledge this support. The United States Geological Survey (USGS) provided significant direct and indirect supplemental funding through its Land Resources Mission Area (LRMA), National Land Imaging Program (NLIP), and Land Change Science (LCS) program, which we also gratefully acknowledge. This research is part of the Global Food Security-support Analysis Data Project at 30-m (GFSAD30). The project was led by USGS in collaboration with NASA Ames, University of New Hampshire (UNH), University of Wisconsin (UW), NASA Goddard Space Flight Center (GSFC), and Northern Arizona University (NAU). Finally, special thanks to Ms. Susan Benjamin, Director of the Western Geographic Science Center (WGSC) of USGS, and Mr. Larry Gaffney, Administrative Officer of WGSC of USGS, for their support and encouragement throughout this research. We would also like to thank Dr. Sunil Dubey, Assistant Director, MNCFC, for providing sub-national statistics.
Table 1: Prevalence of acute malnutrition by state
The nutrition situation in South Sudan is concerning. Northern Bahr el Ghazal and Warrap have the highest levels of undernutrition (24.2% and 17.6%, respectively). In the two states, GAM has been consistently above the emergency threshold (greater than 15%). Key contextual factors related to poor food consumption and inadequate maternal/child care correlate with the poor nutrition situation. The persistent inter-communal conflicts in parts of the Greater Bahr el Ghazal, coupled with increasing food prices, have compromised access to and consumption of food and are affecting child nutrition. Additionally, these states have registered very high levels of morbidity among children and wasting among women. A strong correlation was found between these two indicators (morbidity and wasting among women) and acute malnutrition in children. Nutrition indicators were not computed for Unity, Jonglei, and Upper Nile states during this assessment as a result of an inadequate sample of children. However, SMART surveys conducted in the Greater Upper Nile states (prior to this assessment) indicate that GAM rates in the majority of areas are above the emergency threshold (>15%), with the worst malnutrition levels observed in Unity. Extremely high levels of GAM have also been reported in Protection of Civilian sites and among IDPs (34% GAM in the Bentiu POC and 22% GAM among Renk IDPs). High levels of acute malnutrition in these states
A preliminary attempt is made in this study to construct a food security index (FSI) for South Asia, drawing on earlier exercises (mostly using cross-sectional/survey data) undertaken at the country/region level. While it is not possible to cover all key variables in this FSI, an attempt is made to reflect some of the critical variables (covering various aspects of food security) on which time series data are available for all the countries included in the construction of the index. These indicators include the per capita food availability index, per capita food production index, self-sufficiency ratio index, and the inverse of the relative food price index (clearly there are other factors/indicators the inclusion of which may improve the index, provided that time series data are available). Food availability per capita is clearly a critical factor in determining food security and was given a 50 percent weight. Food production in most of the countries is a key factor that has a bearing on the level of food consumption, but it is not by itself a sufficient indicator and was thus assigned a weight of 1/6. The inverse of the relative food price index and the self-sufficiency ratio index were also given weights of 1/6 each. The real food price index is an indicator of affordability (access), and the self-sufficiency index represents the extent of a country's exposure to external quantitative and price shocks. The weights used are similar to those used in the Report of the Task Force on Food
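Under the weighting scheme stated above (1/2 for food availability and 1/6 each for the remaining three sub-indices), the FSI reduces to a simple weighted sum. A minimal sketch; the sub-index values used in the example are hypothetical, assumed to be on a base-100 scale:

```python
# Weights as stated in the text: availability 1/2, the rest 1/6 each.
WEIGHTS = {
    "food_availability": 1 / 2,   # per capita food availability index
    "food_production":   1 / 6,   # per capita food production index
    "self_sufficiency":  1 / 6,   # self-sufficiency ratio index
    "inverse_price":     1 / 6,   # inverse of relative food price index
}

def food_security_index(indicators):
    """Weighted aggregate of the four sub-indices (weights sum to 1)."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9
    return sum(WEIGHTS[k] * indicators[k] for k in WEIGHTS)

# Hypothetical country-year observation.
sample = {"food_availability": 104.0, "food_production": 98.0,
          "self_sufficiency": 100.0, "inverse_price": 95.0}
print(round(food_security_index(sample), 2))  # -> 100.83
```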
SERVIR is a joint development initiative of NASA and USAID, working in partnership with leading regional organizations around the globe, to help developing countries use information provided by Earth-observing satellites and geospatial technologies to address Food Security, Water and Disasters, Weather and Climate, and Land Use/Land Cover Change.
The general public has been using Cloud Computing, without awareness of the term, in the form of Internet services such as Hotmail (since 1996), YouTube (since 2005), Facebook (since 2006), and Gmail (since 2007). Hotmail was probably the first Cloud Computing application that allowed the general public to keep their data, in the form of text and image files, on remote servers provided and managed by others. In the commercial sector, Amazon.com was one of the first vendors to provide storage space, computing resources, and business functionality following the Cloud Computing model. In 2006, it launched Elastic Compute Cloud (EC2), which allowed companies and individuals to rent computers to run their own enterprise applications and services. Salesforce.com, founded in 1999, pioneered the concept of delivering enterprise applications as Cloud-based services to enterprises.
The trends we have noted are predominantly aimed at cost efficiency. However, economic constraints are but one aspect of what it means for much of the world’s poor to use mobile phones in various ways. The available observable data on basic and feature phones can be revealing of how social and cultural norms may (or may not) be changing in relation to increased communication capacities. In what follows, we present a range of questions related to these trends and the theoretical gains that can be made from them. We focus on “the data inadvertently generated in the everyday use of technology, and [that require] no special device or software to be given to the subject” (Blumenstock, 2012, p. 110) because they can be useful where interviews and surveys are difficult or where direct interventions are not possible or scalable. That mobile Internet is limited in use does not mean that mobile data on mobile use(r)s cannot be employed to answer questions beyond connectivity. The metrics and methodologies using observable data are often replicable and do provide a useful way to do cross-country work (Molony, 2012). Used in concert with other research methods, these observational data can strengthen theorizing on the interaction of mobile and social life in a much more global manner than current literature suggests.
Large-scale studies are needed to increase our understanding of how large-scale conservation threats, such as climate change and deforestation, are impacting diverse tropical ecosystems. These types of studies rely fundamentally on access to extensive and representative datasets (i.e., "big data"). In this study, I assess the availability of plant species occurrence records through the Global Biodiversity Information Facility (GBIF) and the distribution of networked vegetation census plots in tropical South America. I analyze how the amount of available data has changed through time and the consequent changes in taxonomic, spatial, habitat, and climatic representativeness. I show that there are large and growing amounts of data available for tropical South America. Specifically, there are almost 2,000,000 unique geo-referenced collection records representing more than 50,000 species of plants in tropical South America and over 1,500 census plots. However, there is still a gaping "data void" such that many species and many habitats remain so poorly represented in either of the databases as to be functionally invisible for most studies. It is important that we support efforts to increase the availability of data, and the representativeness of these data, so that we can better predict and mitigate the impacts of anthropogenic disturbances.
Integrating Big Data as support for the strategic decision-making process rests on the contribution of the three Vs. (i) Data volume is a critical factor in the current challenging business context: on one hand, data are everywhere and organizations need new systems to store them; on the other hand, there are many unexplored data sources that organizations want to mine for better-predicted outcomes with greater precision (George et al., 2014). (ii) Velocity allows timely decisions that can lead an organization to assume a market-leader role. (iii) Variety broadens the available information sources: strategic information is not strictly represented in database formats but can also be located in sources that are hard to access, e.g. e-mails, internal company documents, and mental databases (Forrester, 1994; Madden, 2012; Lin et al., 2014; Kwon et al., 2014).
Figure 4: Monthly Number of DOI Requests Submitted by DAACs and Registered by the ESDIS DOI System, as of February 2017.
• The plan to make DOI registration a mandatory requirement for the metadata submitted to the Common Metadata Repository (CMR 2017) that is used for searching EOSDIS data products by users. In addition, data citations are becoming more common, and more publishers are mandating data citations that include DOIs (COPDESS 2015). Various publishers encourage citation of datasets that include a persistent method of identification that is machine-actionable, globally unique, and widely used by a community (DCSG 2014). ESDIS DOI registration has been in place for the last five years, and it will take some time for citations of datasets using the DOIs to grow in the scientific literature. To examine the extent of their use, a few Google Scholar searches were performed to collect information on articles that cite “10.5067” and “NASA” in the references or the text. The referenced DOIs appeared in refereed journal articles, books, and posters. After excluding articles that seemed not to be relevant for this analysis, there were over 370 articles in 2016 that cite DOIs, compared to approximately 100 in 2015. These counts do not account for multiple DOIs referenced in one article. This shows a significant increase in the usage of ESDIS-registered DOIs in citations. Such usage is expected to increase multifold as additional data products are assigned DOIs and publishers require authors to provide data citations.
Performance data quoted represents past performance; past performance does not guarantee future results. The investment return and principal value of an investment will fluctuate so that an investor’s shares, when redeemed, may be worth more or less than their original cost. Current performance of the Fund may be lower or higher than the performance quoted. Performance data current to the most recent month end may be obtained by calling 855.244.4859. The Fund imposes a 2.00% redemption fee on shares redeemed within 60 days. Performance data does not reflect the imposition of the redemption fee and if it had, performance would have been lower. Performance shown including sales charge reflects the Class A maximum sales charge of 4.75% and the Class C Contingent Deferred Sales Charge (CDSC) of 1.00%. Performance data excluding sales charge does not reflect the deduction of the sales charge or CDSC and if reflected, the sales charge or fee would reduce the performance quoted. Investment performance reflects fee waivers, expenses and reimbursements in effect. In the absence of such waivers, total return and NAV would be reduced. Classes A and C were incepted on May 1, 2012 and Class Y was incepted on December 1, 2011. The FTSE EPRA/NAREIT Developed Index references Class A’s inception date.
Sandven, Stein; Sofieva, Viktoria F.; Wagner, Wolfgang
Abstract: The question of how to derive and present uncertainty information in climate data records (CDRs) has received sustained attention within the European Space Agency Climate Change Initiative (CCI), a programme to generate CDRs addressing a range of essential climate variables (ECVs) from satellite data. Here, we review the nature, mathematics, practicalities, and communication of uncertainty information in CDRs from Earth observations. This review paper argues that CDRs derived from satellite-based Earth observation (EO) should include rigorous uncertainty information to support the application of the data in contexts such as policy, climate modelling, and numerical weather prediction reanalysis. Uncertainty, error, and quality are distinct concepts, and the case is made that CDR products should follow international metrological norms for presenting quantified uncertainty. As a baseline for good practice, total standard uncertainty should be quantified per datum in a CDR, meaning that uncertainty estimates should clearly discriminate more and less certain data. In this case, flags for data quality should not duplicate uncertainty information, but instead describe complementary information (such as the confidence in the uncertainty estimate provided or indicators of conditions violating the retrieval assumptions). The paper discusses the many sources of error in CDRs, noting that different errors may be correlated across a wide range of timescales and space scales. Error effects that contribute negligibly to the total uncertainty in a single-satellite measurement can be the dominant sources of uncertainty in a CDR on the large space scales and long timescales that are highly relevant for some climate applications. For this reason, identifying and characterizing the relevant sources of uncertainty for CDRs is particularly challenging.
The characterization of uncertainty caused by a given error effect involves assessing the magnitude of the effect, the shape of the error distribution, and the propagation of the uncertainty to the geophysical variable in the CDR accounting for its error correlation properties. Uncertainty estimates can and should be validated as part of CDR validation when possible. These principles are quite general, but the approach to providing uncertainty information appropriate to different ECVs is varied, as confirmed by a brief review across different ECVs in the CCI. User requirements for uncertainty information can conflict with each other, and a variety of solutions and compromises are possible. The concept of an ensemble CDR as a simple means of communicating rigorous uncertainty information to users is discussed. Our review concludes by providing eight concrete recommendations for good practice in providing and communicating uncertainty in EO-based climate data records.
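The point made above, that error effects negligible in a single measurement can dominate the uncertainty of averages over large space scales and long timescales, follows directly from how correlated and uncorrelated error components propagate into a mean. A minimal sketch, assuming one fully independent and one fully correlated component with hypothetical magnitudes (not values from any CCI product):

```python
import math

def uncertainty_of_mean(u_random, u_correlated, n):
    """Total standard uncertainty of the mean of n measurements, assuming
    one fully independent component (shrinks as 1/sqrt(n)) and one fully
    correlated component (does not shrink when averaging)."""
    return math.sqrt((u_random ** 2) / n + u_correlated ** 2)

# Per-measurement: random noise of 0.5 K, correlated effect of only 0.05 K.
for n in (1, 100, 10000):
    print(n, round(uncertainty_of_mean(0.5, 0.05, n), 4))
# For n = 1 the noise dominates (~0.50); by n = 10000 the mean's
# uncertainty is pinned near the 0.05 correlated floor.
```

This is why per-datum uncertainties alone are insufficient for climate applications: the error correlation structure determines which components survive large-scale averaging.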
Since this investigation was funded by the European Commission as part of the ESWIRP project, the available budget only allowed for testing over a limited range of conditions. The test plan for the 5-day test campaign in the ETW was a compromise between the test requirements of the European project group, chaired by J.L. Goddard of ONERA-France, which focused on acquiring data for CFD validation of unsteady wake flows, and a repeat of the conditions at which the CRM model used had been tested in the NTF. A few polars were added at a very low Reynolds number to provide comparative aerodynamic data for the Japanese research organisation JAXA, which has tested a downscaled version of the CRM in its transonic tunnel.
In addition to the extended validation described above, triple collocation techniques (McColl et al., 2014) have been used to assess uncertainty estimates in near-surface wind speed (Stoffelen, 1998), soil moisture (Gruber et al., 2016), and other remotely sensed variables. For valid quantitative estimation or validation of uncertainty, this technique requires three sources of collocatable data with errors that are independent and random (both between the data sources and within each data source) and assumes that sampling mismatches and differences in the definition of the measurands between the three types of data are negligible. Other uncertainty validation methods are briefly reviewed in Sofieva et al. (2014). The uncertainty arising specifically from instrument noise can be validated using an Earth target that is assumed not to vary, e.g., white sands in New Mexico for reflectance validation. In this case, validation is not against independent measurements, but is performed using repeated observations by the same instrument. Such analyses would be more robust if the geophysical standard could be traced to a more controlled reference, which would require more support for repeated accurate measurements of the Earth target from the ground (Loew et al., 2017). For categorical ECVs, such as land cover type, a degree of uncertainty validation can be obtained by verifying that the estimated misclassification rates in the product are stable with respect to reasonable ranges of classification parameters. For instance, if classification is based on training a classifier using a dataset split into calibration and validation ("train" and "test") subsets, the process can be repeated many times with a different random division into train and test subsets, which allows the dispersion in the misclassification rates to be characterized.
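The repeated random-split procedure described above can be sketched as follows. The toy one-dimensional threshold classifier and the synthetic two-class data are assumptions for illustration, stand-ins for a real land-cover classifier; only the resampling logic mirrors the text:

```python
import random
import statistics

random.seed(42)

# Synthetic labeled samples: (feature, class); the classes overlap slightly,
# so some misclassification is unavoidable.
data = [(random.gauss(0.0, 1.0), 0) for _ in range(200)] + \
       [(random.gauss(1.5, 1.0), 1) for _ in range(200)]

def fit_threshold(train):
    """'Train' by placing the decision threshold midway between class means."""
    m0 = statistics.mean(x for x, y in train if y == 0)
    m1 = statistics.mean(x for x, y in train if y == 1)
    return (m0 + m1) / 2

def misclassification_rate(test, threshold):
    errors = sum((x > threshold) != bool(y) for x, y in test)
    return errors / len(test)

# Repeat with many different random train/test divisions to characterize
# the dispersion of the misclassification rate.
rates = []
for _ in range(100):
    shuffled = random.sample(data, len(data))
    train, test = shuffled[:300], shuffled[300:]
    rates.append(misclassification_rate(test, fit_threshold(train)))

print(round(statistics.mean(rates), 3), round(statistics.stdev(rates), 3))
```

A small standard deviation of the rate across splits supports the claim that the product's misclassification estimates are stable with respect to the classification parameters.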
Although unique, NASA still faces challenges similar to those of other Big Data users in the federal, industry, and academic sectors.
NASA’s “Big Data” challenges go beyond the stereotypical “Big Data” problem space of warehousing and mining unstructured transactional business data. This should be no surprise to the ESIP community: many of you have significant day-to-day experience with Big Data and may be ahead of NASA on the innovation curve.
As a Geosoft VOXI customer, you have entrusted Geosoft to help protect your data. Geosoft values this trust, and the privacy and security of your data is one of our top concerns. Geosoft strives to take an industry leadership role when it comes to security, privacy, and compliance practices.
As shown in Table 2, the submodels of SOC in ESMs differ in the number of SOC pools and in their temperature and moisture functions. Todd-Brown et al. (2013) reported that the ESM outputs and observational SOC database results did not produce consistent patterns for soil carbon pools or for temperature and moisture sensitivity functions. Exbrayat et al. (2014) found that the turnover times of SOC in the ESM outputs were not affected by the number of SOC pools. Our analyses also indicated that a match or mismatch of major contributing factors between the ESM outputs and observational database results is not strongly related to these properties of SOC submodels. Thus, the spatial patterns of SOC from ESMs are likely more strongly affected by the basic structure, driving variables (NPP and temperature), and parameterizations (turnover time, and temperature and moisture sensitivity, which are influential) than by the number of pools and the types of temperature and moisture sensitivity functions. From a mathematical perspective, the similarity is likely fundamentally based on the description of these SOC dynamics by a series of first-order linear ordinary differential equations that are not autonomous (Manzoni and Porporato, 2009; Sierra and Müller, 2015). With these equations, the outputs generally do not show chaotic behaviors.
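The model structure named above (coupled first-order linear ODEs made non-autonomous by a time-varying temperature scalar) can be sketched as a toy two-pool system. All pool sizes, rate constants, the carbon input, and the Q10 function below are hypothetical illustrations, not parameters of any specific ESM:

```python
import math

NPP = 0.5                     # carbon input (kg C m^-2 yr^-1), hypothetical
K_FAST, K_SLOW = 0.5, 0.02    # base first-order decay rates (yr^-1)
TRANSFER = 0.3                # fraction of fast-pool decay routed to slow pool

def xi(t):
    """Q10-style temperature scalar; T(t) varies seasonally, which is what
    makes the linear system non-autonomous."""
    temp = 15.0 + 10.0 * math.sin(2 * math.pi * t)   # deg C over one year
    return 2.0 ** ((temp - 15.0) / 10.0)             # Q10 = 2

def step(c_fast, c_slow, t, dt):
    """One explicit-Euler step of the two-pool linear system:
       dC_fast/dt = NPP - k_f * xi(t) * C_fast
       dC_slow/dt = a * k_f * xi(t) * C_fast - k_s * xi(t) * C_slow"""
    d_fast = NPP - K_FAST * xi(t) * c_fast
    d_slow = TRANSFER * K_FAST * xi(t) * c_fast - K_SLOW * xi(t) * c_slow
    return c_fast + dt * d_fast, c_slow + dt * d_slow

c_fast, c_slow, dt = 1.0, 10.0, 0.01
for i in range(int(100 / dt)):            # integrate for 100 model years
    c_fast, c_slow = step(c_fast, c_slow, i * dt, dt)
print(round(c_fast, 2), round(c_slow, 2))
```

Because the system is linear in the pool sizes, trajectories relax smoothly toward a (seasonally forced) steady state set by NPP and the turnover rates, consistent with the observation that such models do not exhibit chaotic behavior.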