Fig. 8. Histogram RGB 123
For comparison, we also used satellite remote sensing imagery of the Mostistea surface provided by SPOT in 2007 (Fig. 9).
The current plans for SPOT-5 envision the replacement of the SPOT-4 HRVIR systems with high-resolution (HRG) instruments. These systems are designed to provide higher spatial resolution (5 m instead of 10 m) in panchromatic mode and 10 m (instead of 20 m) resolution in the green, red, and near-IR bands, with 20 m resolution maintained in the mid-IR band due to limitations imposed by the geometry of the CCD sensors used in this band. The panchromatic band will return to the spectral range employed in SPOT-1, 2, and 3 (0.51–0.73 μm). Also envisioned is the provision of 2.5-m resolution panchromatic data by combining two 5-m resolution images shifted along track and sampled every 2.5 m.
Abstract: Hyperspectral remote sensing imagery contains much more information in the spectral domain than does multispectral imagery. The consecutive and abundant spectral signals provide great potential for classification and anomaly detection. In this study, two real hyperspectral data sets were used for anomaly detection. One data set was Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data covering the post-attack World Trade Center (WTC), in which the anomalies are fire spots. The other data set, called SpecTIR, contained fabric panels as anomalies against their background. Existing anomaly detection algorithms, including the Reed–Xiaoli detector (RXD), the blocked adaptive computationally efficient outlier nominator (BACON), the random-selection-based anomaly detector (RSAD), the weighted-RXD (W-RXD), and the probabilistic anomaly detector (PAD), are reviewed here. The RXD generally places strict assumptions on the background, which cannot be met in many scenarios, while BACON, RSAD, and W-RXD employ strategies to optimize the estimation of background information. The PAD first estimates both background information and anomaly information and then uses this information to conduct anomaly detection. Here, BACON, RSAD, W-RXD, and PAD outperformed the RXD in terms of detection accuracy, and W-RXD and PAD required less time than BACON and RSAD.
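As a reference point for the family of detectors reviewed here, the sketch below implements the classical global RX test, scoring each pixel by its Mahalanobis distance from scene-wide background statistics; the NumPy implementation and the percentile-based threshold in the usage example are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def rx_anomaly_scores(cube):
    """Global RX detector: Mahalanobis distance of each pixel spectrum
    from the mean/covariance of the whole scene (used as background).

    cube : ndarray of shape (rows, cols, bands)
    returns : ndarray of shape (rows, cols) with anomaly scores
    """
    rows, cols, bands = cube.shape
    pixels = cube.reshape(-1, bands).astype(np.float64)

    mean = pixels.mean(axis=0)
    centered = pixels - mean
    # Regularize the covariance slightly so it stays invertible
    # even for highly correlated bands.
    cov = np.cov(centered, rowvar=False) + 1e-6 * np.eye(bands)
    cov_inv = np.linalg.inv(cov)

    # Squared Mahalanobis distance for every pixel.
    scores = np.einsum('ij,jk,ik->i', centered, cov_inv, centered)
    return scores.reshape(rows, cols)

# Usage: flag the highest-scoring pixels as anomalies.
cube = np.random.rand(64, 64, 50)          # synthetic stand-in for a hyperspectral cube
scores = rx_anomaly_scores(cube)
anomalies = scores > np.percentile(scores, 99.5)
```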
(2012) provided a comprehensive framework that includes all potentially relevant indicators that can be used for image-based slum identification [22]. Forestier et al. (2013) built a coastal zone ontology to extract coastal zones using background and semantic knowledge [23]. Kyzirakos et al.
(2014) provided wildfire monitoring services by combining satellite images and geospatial data with ontologies [24]. Belgiu et al. (2014a) presented an ontology-based classification method for extracting types of buildings, in which airborne laser scanning data are employed, and obtained effective recognition results [25]. Belgiu et al. (2014b) provided a formal expression tool to express object-based image analysis technology through ontologies [26]. Cui (2013) presented a GEOBIA method based on geo-ontology and relative elevation [27]. Luo (2016) developed an ontology-based framework that was used to extract land cover information while interpreting HRS remote sensing images at the regional level [28]. Durand et al. (2007) proposed a recognition method based on an ontology which has been developed by experts from the particular domain [29]. Bannour et al. (2011) presented an overview and an analysis of the use of semantic hierarchies and ontologies to provide a deeper image understanding and a better image annotation in order to furnish retrieval facilities to users [30]. Andres et al. (2012) demonstrated that expert knowledge explanation via ontologies can improve the automation of satellite image exploitation [31]. All these studies focus either on a single thematic aspect based on expert knowledge or on a specific geographic entity. However, existing studies do not provide comprehensive and transferable frameworks for objective modelling in GEOBIA. None of the existing methods allows for a general ontology-driven semantic classification method. Therefore, this study develops an object-based semantic classification methodology for high-resolution remote sensing imagery using an ontology that enables a common understanding of the GEOBIA framework structure for human operators and for software agents. This methodology shall enable reuse and transferability of a general GEOBIA ontology while making GEOBIA assumptions explicit and analysing the GEOBIA knowledge corpus.
Abstract. Estimation of noise contained within a remote sensing image is essential in order to counter the effects of noise contamination. The application of convolution data-masking techniques can effectively portray the influence of noise. In this paper, we describe the performance of a developed noise-estimation technique using data masking in the presence of simulated additive and multiplicative noise. The estimation method employs Laplacian and gradient data masks, and takes advantage of the correlation properties typical of remote sensing imagery. The technique is applied to typical textural images that serve to demonstrate its effectiveness. The algorithm is tested using Landsat Thematic Mapper (TM) and Shuttle Imaging Radar (SIR-C) imagery. The algorithm compares favourably with existing noise-estimation techniques under low to moderate noise conditions.
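For orientation, the sketch below shows one standard Laplacian data-mask noise estimator (an Immerkaer-style estimator), which follows the same principle of cancelling scene structure with a convolution mask so that the residual is dominated by noise; it is not the authors' exact masking algorithm.

```python
import numpy as np
from scipy.ndimage import convolve

def estimate_noise_sigma(image):
    """Estimate additive noise standard deviation with a Laplacian data mask.
    Scene structure is largely cancelled by the mask, so the remaining
    response is dominated by noise."""
    image = image.astype(np.float64)
    laplacian_mask = np.array([[ 1, -2,  1],
                               [-2,  4, -2],
                               [ 1, -2,  1]], dtype=np.float64)
    response = convolve(image, laplacian_mask, mode='reflect')
    h, w = image.shape
    # sqrt(pi/2)/6 rescales the mean absolute mask response to a Gaussian sigma.
    return np.sqrt(np.pi / 2.0) * np.abs(response).sum() / (6.0 * (h - 2) * (w - 2))
```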
4.1 Introduction
Spectral mixture analysis (SMA) has been widely applied to address the mixed pixel problem, a typical issue associated with medium- and coarse-resolution remote sensing imagery (Powell et al., 2007; Roberts et al., 1992; Sabol et al., 1992; Settle and Drake, 1993). SMA assumes that each image pixel is comprised of several land cover classes, each of which has distinctive spectral signatures (Settle and Drake, 1993; Tompkins et al., 1997). Traditional SMA approaches, with a fixed set of endmembers, perform reasonably well in areas with relatively homogenous land covers, mostly due to the ease of identifying representative endmembers. In urban and suburban environments, however, inter-class and intra-class spectral variability widely exist (Kumar et al., 2013; Roth et al., 2012; Settle, 2006; Thorp et al., 2013; Youngentob et al., 2011). Therefore, the capability of traditional SMA models to deal with complex urban and suburban landscapes has been questioned, as the few endmembers may not be able to represent their corresponding land cover classes (Radeloff et al., 1999; Song, 2005; Tang et al., 2007).
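The linear mixture model behind SMA treats a pixel spectrum as a non-negative combination of endmember spectra. The sketch below solves that model for one pixel with SciPy's non-negative least squares; the endmember matrix is assumed known, and this is an illustration of the generic model rather than any specific SMA variant discussed above.

```python
import numpy as np
from scipy.optimize import nnls

def unmix_pixel(pixel_spectrum, endmembers):
    """Estimate endmember fractions under the linear mixture model
    pixel ≈ endmembers @ fractions, with fractions >= 0.

    pixel_spectrum : (bands,) reflectance vector
    endmembers     : (bands, n_endmembers) matrix, one column per endmember
    """
    fractions, residual = nnls(endmembers, pixel_spectrum)
    # Renormalize so the fractions sum to one (sum-to-one constraint of SMA).
    total = fractions.sum()
    if total > 0:
        fractions = fractions / total
    return fractions, residual
```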
Discussion
We developed general allometric models for estimating both the stem diameter and aboveground biomass of trees based on crown architectural properties which can be remotely sensed: tree height and crown diameter. Here, we discuss how these allometric models can be used to integrate remote sensing imagery – particularly ALS data – into forest monitoring programmes, allowing carbon stocks to be mapped with accuracy across forest landscapes and shedding light on the processes which govern the structure and dynamics of forest ecosystems.
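As a rough illustration of how such height/crown-diameter allometries are commonly calibrated, the sketch below fits a power-law relationship by log-log regression; the functional form (biomass as a power of height times crown diameter) is an assumption of this illustration, not the published models.

```python
import numpy as np

def fit_power_law_allometry(height_m, crown_diameter_m, agb_kg):
    """Fit AGB = a * (H * CD)^b by ordinary least squares in log-log space.
    Returns the coefficients (a, b)."""
    x = np.log(height_m * crown_diameter_m)
    y = np.log(agb_kg)
    b, log_a = np.polyfit(x, y, 1)       # slope = b, intercept = log(a)
    return np.exp(log_a), b

def predict_agb(a, b, height_m, crown_diameter_m):
    """Apply the fitted allometry to remotely sensed height and crown diameter."""
    return a * (height_m * crown_diameter_m) ** b
```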
4. CONCLUSIONS
Remotely sensed data (RSD) is a classic form of big data, and its services and applications are becoming more and more widespread. Cloud computing technology provides a platform for remotely sensed big data services that can support the storage of huge volumes of RSD and efficient on-demand access. However, to the best of our knowledge, there has been no report on the security of RSD storage and usage in cloud computing. The re-encryption method ensures RSD security in cloud computing, and the marking technique protects RSD copyright. For remote sensing images (RSI), most current marking operations cause some information loss in the marked RSI. To further extend the usability of RSI, a reversible marking method for remote sensing imagery is introduced. According to different applications, we can provide two versions: the marked RSI and the extracted RSI.
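To illustrate what "reversible" marking means in this context, the sketch below implements one well-known reversible scheme, difference expansion on pixel pairs; it is not necessarily the method introduced in this work, and overflow checks and location-map handling are omitted for brevity.

```python
def embed_bit(x, y, bit):
    """Reversibly embed one bit into a pixel pair via difference expansion.
    Overflow/underflow handling is omitted for brevity."""
    l = (int(x) + int(y)) // 2           # pair average
    h = int(x) - int(y)                  # pair difference
    h2 = 2 * h + bit                     # expand difference and append the bit
    return l + (h2 + 1) // 2, l - h2 // 2

def extract_bit(x2, y2):
    """Recover the embedded bit and the original pixel pair exactly."""
    l = (int(x2) + int(y2)) // 2
    h2 = int(x2) - int(y2)
    bit = h2 & 1
    h = h2 >> 1                          # undo the expansion
    return bit, l + (h + 1) // 2, l - h // 2

# Usage: embedding then extraction restores the original pair bit-exactly.
x2, y2 = embed_bit(120, 117, 1)
bit, x, y = extract_bit(x2, y2)          # bit == 1, (x, y) == (120, 117)
```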
Keywords: Gaussian filtering, Histogram of Oriented Gradients (HOG) feature extraction, DCNN
1 INTRODUCTION
In remote sensing systems, roads are among the most important features, and feature extraction is required to identify them from high-resolution satellite imagery. Nowadays, roads are changing more rapidly than ever. In order to keep up with this development and adapt to dynamic road data, updating road data in real time will contribute to the partitioning of functional areas around roads and the assessment of traffic capacity. Road recognition from remote sensing imagery can be divided into two phases: road detection and road classification. Roads are recognized according to the road classification results after the road networks have been detected and extracted.
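A minimal sketch of the Gaussian-filtering and HOG feature-extraction steps named in the keywords, using SciPy and scikit-image; the smoothing sigma and the HOG cell/block parameters are illustrative choices, not the values used in this work.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.feature import hog

def road_patch_features(patch, sigma=1.0):
    """Smooth a grayscale image patch with a Gaussian filter, then describe it
    with Histogram of Oriented Gradients (HOG) features for a downstream
    road/non-road classifier (e.g. a DCNN or an SVM)."""
    smoothed = gaussian_filter(patch.astype(np.float64), sigma=sigma)
    features = hog(smoothed,
                   orientations=9,
                   pixels_per_cell=(8, 8),
                   cells_per_block=(2, 2),
                   feature_vector=True)
    return features
```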
Abstract— Cloud is one of the most common interferers in Moderate Resolution Imaging Spectroradiometer (MODIS) remote sensing imagery. Because of cloud interference, much important and useful information covered by cloud cannot be recovered well. How to detect and remove cloud from MODIS imagery is therefore an important issue for the wide application of remote sensing data. In general, cloud can be roughly divided into two types, namely thin cloud and thick cloud. In order to effectively detect and eliminate cloud, an automatic algorithm for cloud detection and removal is proposed in this paper. Firstly, several necessary preprocessing steps are applied to the MODIS L1B data, including geometric precision correction, bowtie effect elimination, and stripe noise removal. Furthermore, by analyzing the cloud spectral characteristics derived from the thirty-six bands of MODIS data, it can be found that the spectral reflectance of ground and cloud differs across the MODIS bands. Hence, cloud and ground can be identified based on the analysis of multispectral characteristics derived from MODIS imagery. Cloud removal mainly targets the cloud regions rather than the whole image, which improves processing efficiency. For thin cloud and thick cloud regions, the corresponding cloud removal algorithms are proposed in this paper.
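To illustrate the kind of multispectral test implied here, the sketch below flags pixels that are simultaneously bright in a visible band and cold in a thermal band; the band choices and threshold values are illustrative assumptions, not the thresholds derived in this paper.

```python
import numpy as np

def simple_cloud_mask(vis_reflectance, bt_11um_kelvin,
                      refl_threshold=0.3, bt_threshold=285.0):
    """Flag pixels as cloud when they are both bright in a visible band and
    cold in the 11-um thermal band. Thresholds are illustrative only.

    vis_reflectance : 2-D array of top-of-atmosphere reflectance (e.g. MODIS band 1)
    bt_11um_kelvin  : 2-D array of brightness temperature (e.g. MODIS band 31)
    returns         : boolean cloud mask (True = cloud)
    """
    bright = vis_reflectance > refl_threshold
    cold = bt_11um_kelvin < bt_threshold
    return np.logical_and(bright, cold)
```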
Abstract. One of the most important methods to solve traffic congestion is to detect the incident state of a roadway. This paper describes the development of a method for road traffic monitoring aimed at the acquisition and analysis of remote sensing imagery. We propose a strategy for road extraction, vehicle detection and incident detection from remote sensing imagery using techniques based on neural networks, Radon transform for angle detection and traffic-flow measurements. Traffic-bottleneck detection is another method that is proposed for recognizing incidents in both offline and real-time mode. Traffic flows and incidents are extracted from aerial images of bottleneck zones. The results show that the proposed approach has a reasonable detection performance compared to other methods. The best performance of the learning system was a detection rate of 87% and a false alarm rate of less than 18% on 45 aerial images of roadways.
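As a sketch of the Radon-transform angle-detection step, the snippet below finds the dominant orientation of an extracted road mask with scikit-image; the angle sampling and the variance-based peak criterion are assumptions of this illustration, not necessarily the paper's exact procedure.

```python
import numpy as np
from skimage.transform import radon

def dominant_road_angle(binary_road_mask):
    """Estimate the dominant road orientation from a binarized road mask.
    Elongated structures produce sharply peaked Radon projections at their
    own orientation, so the projection angle whose sinogram column has the
    largest variance is taken as the road angle (in degrees)."""
    angles = np.arange(0.0, 180.0, 1.0)
    sinogram = radon(binary_road_mask.astype(np.float64), theta=angles, circle=False)
    column_variance = sinogram.var(axis=0)   # one column per projection angle
    return angles[int(np.argmax(column_variance))]
```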
compliance matrix. I would recommend using remote sensing imagery to assist in the mapping and monitoring of wildfires in San Diego County, and I would particularly recommend satellite imagery such as IKONOS, which provides high spatial and radiometric detail. The high level of detail in satellite imagery and the ability of satellites to collect data from otherwise inaccessible locations make remote sensing technology a highly valuable and potentially vital tool for wildfire monitoring.
8.1 Summary and Discussion
In this thesis we investigated and developed advanced methods and systems for the retrieval of geo-/bio-physical variables from satellite remote sensing imagery. In particular, several issues related to different steps of the retrieval process, as well as to its application to challenging real operational scenarios, were addressed. For each considered topic, an analysis of the state of the art was conducted. Starting from this analysis, novel solutions were proposed, implemented, and applied to remote sensing data to assess their effectiveness. The achieved results pointed out that the proposed solutions represent a valuable contribution to the exploitation of remote sensing technology for mapping and monitoring natural resources and physical processes on the Earth's surface, with particular regard to the mountain environment. This has been a hot topic in the scientific community, especially in recent years, thanks to the potential offered by new-generation and upcoming satellite remote sensing systems and the growing interest in accurate and up-to-date mapping and monitoring of the Earth's surface.
Abstract. We illustrate the utility of variational destriping for ocean color images from both multispectral and hyperspectral sensors. In particular, we examine data from a filter spectrometer, the Visible Infrared Imaging Radiometer Suite (VIIRS) on the Suomi National Polar Partnership (NPP) orbiter, and an airborne grating spectrometer, the Jet Propulsion Laboratory's (JPL) hyperspectral Portable Remote Imaging Spectrometer (PRISM) sensor. We solve the destriping problem using a variational regularization method, assigning spatial weights to preserve the other features of the image during the destriping process. The target functional penalizes "the neighborhood of stripes" (strictly, directionally uniform features) while promoting data fidelity, and the functional is minimized by solving the Euler–Lagrange equations with an explicit finite-difference scheme. We show the accuracy of our method on a benchmark data set which represents the sea surface temperature off the coast of Oregon, USA. Technical details, such as how to impose continuity across data gaps using inpainting, are also described.
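As an illustration of the variational idea only (not the authors' weighted functional or their Euler–Lagrange discretization), the sketch below minimizes a simplified quadratic destriping energy, a data-fidelity term plus a penalty on differences taken across an assumed horizontal stripe direction, by explicit gradient descent.

```python
import numpy as np

def destripe(image, lam=5.0, iters=300):
    """Toy variational destriping by explicit gradient descent on
        E(u) = ||u - f||^2 + lam * ||D_y u||^2,
    where D_y is the first difference across the (assumed horizontal)
    stripe direction. The quadratic penalty smooths across stripes while
    the data-fidelity term preserves the rest of the image."""
    f = image.astype(np.float64)
    u = f.copy()
    step = 1.0 / (2.0 + 8.0 * lam)       # below the explicit-scheme stability limit
    for _ in range(iters):
        # Gradient of the penalty: 2 * lam * D_y^T D_y u
        # (a 1-D discrete Laplacian along axis 0 with one-sided ends).
        lap = np.zeros_like(u)
        lap[1:-1, :] = 2.0 * u[1:-1, :] - u[:-2, :] - u[2:, :]
        lap[0, :] = u[0, :] - u[1, :]
        lap[-1, :] = u[-1, :] - u[-2, :]
        grad = 2.0 * (u - f) + 2.0 * lam * lap
        u -= step * grad
    return u
```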
The latest advances in the remote sensing field have contributed significantly to the broad availability of high-quality data and images. Accordingly, the development of efficient and robust algorithms for the analysis of these data is a very important topic. In particular, classification is one of the most important tasks, especially for Earth observation. The classification problem, i.e., detecting and identifying the different land covers that characterize a given geographical area of interest, is a complex process. Among the procedures involved in the classification problem, feature extraction and fusion, aiming to extract and analyze all the useful information that different remote sensing data sets contain, is a necessary preprocessing step. As collecting ground truth is often expensive and time consuming, the number of available training samples is almost always much smaller than the dimensionality of the feature space. This leads to the Hughes phenomenon, i.e., for a limited number of training samples, the classification accuracy decreases as the dimension increases. The present thesis has focused on developing methodologies for feature extraction and fusion of remote sensing data. Specifically, the proposed solutions relate to semi-supervised learning and to domain adaptation.
Relating each object/pixel to one or more elements of a set of defined labels in the study area is the objective of classification of remotely sensed data. Hence the radiometric information is converted to thematic information 2. Fig. 1 depicts the classification process as a mapping function.
The quality of the remote sensing data does not by itself guarantee accurate feature extraction, but it broadens the range of objectives the user can pursue. The technique used for classification of remote sensing data is a significant factor because of the extensive availability of data with different characteristics. As the spatial resolution increases, pixel-based or sub-pixel-based approaches become infeasible because several pixels represent a single object. Object-based image analysis has therefore emerged and produces sound results 3. Synthetic Aperture Radar (SAR) is an active imaging system with the potential of imaging in "all weather, all time" conditions. The number of SAR-based satellites has increased in the past decade because of its strong impact in applications such as ice monitoring, surface deformation and detection, oil spills, glacier monitoring, urban planning, and military applications. SAR
2 Department of Natural Resources, Chinese Culture University, 55, Hwa Kang Rd, Yangmingshan, Taipei, Taiwan
3 Department of Geography, Chinese Culture University, 55, Hwa Kang Rd, Yangmingshan, Taipei, Taiwan
ABSTRACT
Landslide detection using satellite remote sensing images has been widely studied. This type of application often involves either change detection or multispectral image classification methodologies. If only one set of satellite images is available, the change detection method is of limited use. Collecting and analyzing training-area data for image classification is costly and time consuming. This study therefore utilizes only one SPOT satellite image to estimate the normalized difference vegetation index (NDVI) and to segregate the vegetated and non-vegetated areas of the Ta-An River Basin in Central Taiwan. The slope factor and a textural feature are then used to identify the landslide areas. Results indicate that the accuracy of landslide detection using NDVI alone is about 88%. Using NDVI with the slope factor and textural feature increases the overall accuracy to 97%. This study successfully demonstrates the capability of using a single set of remote sensing imagery to map landslide areas in a large river basin.
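A minimal sketch of the NDVI computation and the vegetated/non-vegetated split described here, assuming SPOT red and near-infrared bands as inputs; the threshold value is illustrative, not the cut-off derived in the study.

```python
import numpy as np

def ndvi(red, nir):
    """Normalized difference vegetation index: (NIR - Red) / (NIR + Red)."""
    red = red.astype(np.float64)
    nir = nir.astype(np.float64)
    denom = nir + red
    denom[denom == 0] = 1e-9             # avoid division by zero
    return (nir - red) / denom

def vegetation_mask(red, nir, threshold=0.2):
    """Separate vegetated (True) from non-vegetated (False) pixels.
    The threshold is illustrative; the study derives its own cut-off."""
    return ndvi(red, nir) > threshold
```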
Figure 10 illustrates the impact of the intensity of interfered pixels in training samples on classification accuracy when the four classifiers are used for the Indian Pines image.
The domain adaptation techniques used in this paper could be greatly improved as the quantity and quality of DIRSIG scenes improve. In the future, we could pull data from multiple DIRSIG scenes, increase the number of object classes in a given scene, increase the variability of the objects in the scene, and also improve the GSD of the scenes themselves. The RIT Signature Interdisciplinary Research Area, UAS Research Laboratory has made some sizable investments in UAS technology, including a platform capable of simultaneously carrying a high-resolution RGB imaging system, LWIR microbolometer, LIDAR, and VNIR HSI push-broom sensor. This UAS platform also carries a GPS/IMU with 1.5-3 cm resolution and a data acquisition unit that logs data from all of the sensors. Their future acquisitions include new platforms capable of carrying payloads up to 24 lbs, an HSI sensor that extends spectral coverage into the SWIR bands, and a better thermal imaging system, among others. A UAS with this type of configuration could be seen as a high-fidelity DIRSIG scene generator; and with a bit of annotation, these scenes could be used to generate higher quality/quantity synthetic images, which could enable the next generation of semantic segmentation frameworks for any remote sensing sensor. Future work should also explore a trade-off on how realistic the imagery actually needs to be;
EO-1 is part of NASA's New Millennium Program, designed to validate new technologies for remote sensing. It was launched from Vandenberg Air Force Base on 21 November 2000 and placed in a sun-synchronous orbit with an altitude of 705 km and a 10:01 AM descending node, giving it an equatorial crossing time that is one minute behind Landsat-7 and a 16-day repeat path orbital cycle. With observations up to two paths off nadir, EO-1 is able to image the same location as many as 5 times every 16 days in daylight. The EO-1 payload is comprised of three instruments: Hyperion, the Advanced Land Imager (ALI), and the Linear Etalon Imaging Spectral Array (LEISA) Atmospheric Corrector. We analyze data from the Hyperion instrument onboard the spacecraft.