A. HISTORY OF LIDAR

LiDAR is but one fascinating utilization of the laser, or Light Amplification by Stimulated Emission of Radiation. What was theorized by Townes and Schawlow 1 and named by Gould, 2 quickly became reality as Maiman and the Hughes Research Laboratory created the first working laser in 1960. 3 Fiocco and Smullin used a pulsed probing technique to analyze the upper atmosphere, arguably the first instance of a laser being employed as a remote sensor. 4 Shortly thereafter, the orientation was reversed and LiDAR was used to measure the topographic profile of a football stadium in Philadelphia. 5 These measurements took advantage of the laser's inherently short wavelength. Laser pulses are directed at a target, usually the Earth, and the sensor measures the time required for each pulse to return. This time is then used to calculate a distance, the accuracy of which is heavily dependent on the fidelity of the clock used to measure the interval between pulse and return. When the laser pulse returns to the LiDAR sensor, it is collected by a detector in order to gather the desired data set. LiDAR detectors usually employ a photomultiplier tube or an avalanche photodiode to amass the photons for recognition. Detectors may be designed to record returns as a full waveform, as individual photons, or as a discrete number of critical ranges.
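The time-of-flight relationship described above can be sketched in a few lines; this is a minimal illustration, and the example numbers are not from the source:

```python
# Time-of-flight ranging: the measured range is half the round-trip
# distance travelled by the pulse at the speed of light.
C = 299_792_458.0  # speed of light in vacuum, m/s

def range_from_return_time(t_round_trip_s: float) -> float:
    """Distance to the target from the pulse's round-trip travel time."""
    return C * t_round_trip_s / 2.0

# A 1 ns timing error corresponds to roughly 15 cm of range error,
# which is why clock fidelity dominates ranging accuracy.
timing_error_m = C * 1e-9 / 2.0
```

A return arriving about 6.67 µs after emission, for example, corresponds to a target roughly 1 km away.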
In lidar data interpretation, a family of solutions is generated in the framework of this approach. Specifically, a series of solutions is generated using different initial guesses, different aerosol assumptions and different settings of a priori constraints. Each single solution is obtained using the regularization technique. Then the individual solutions corresponding to the smallest residuals are averaged, and the result of the averaging is taken as the best estimate of the aerosol properties. This approach has demonstrated the possibility of providing rather adequate retrievals of aerosol properties. However, it is quite time-consuming, a fact that becomes an issue when large volumes of data need to be analyzed, as for example from an air- or space-borne lidar system. Installation of MW lidars on air- or space-borne platforms poses another problem: the retrieval algorithm should be more tolerant to noise in the input data, since reasonable averaging times are likely to be smaller for moving lidar systems. And finally, in the regularization approach described in (Müller et al., 1999; Veselovskii et al., 2002), at least five input optical data (three backscatterings and two extinctions, so-called 3β + 2α) are needed to retrieve the particle size distribution (PSD), but in many applications it would be highly desirable to decrease the number of optical channels. So, the development of an approach permitting the reliable estimation of particle bulk properties such as volume, surface density and effective radius from a reduced number of optical channels would be an important improvement. One way to assess this possibility is to attempt to approximate the bulk properties by a linear combination of the input optical data (extinction and backscattering). The corresponding weight coefficients can be determined by expanding the PSD in terms of the measurement kernels (Twomey, 1977).
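The linear-estimate idea can be sketched as follows. This is a hypothetical illustration only: the weights here are fitted by least squares over a synthetic ensemble, standing in for the kernel-expansion determination of the coefficients (Twomey, 1977); the ensemble, weights and channel count are placeholders, not retrieval results.

```python
import numpy as np

# Estimate a bulk property V (e.g. particle volume) as a linear
# combination of the 3beta + 2alpha optical data: V ~ sum_i c_i * g_i.
rng = np.random.default_rng(0)
n_models, n_channels = 200, 5                  # ensemble size; 3 backscatter + 2 extinction
G = rng.lognormal(size=(n_models, n_channels)) # synthetic optical data for each aerosol model
true_c = np.array([0.3, 0.1, 0.2, 0.25, 0.15]) # placeholder weight coefficients
V = G @ true_c                                 # bulk property implied by each model

# Determine the weights from the ensemble by least squares,
# then apply them to a single measurement vector.
c, *_ = np.linalg.lstsq(G, V, rcond=None)
estimate = G[0] @ c
```

Once the weights are fixed, evaluating the estimate for a new measurement is a single dot product, which is why this route is far cheaper than a full regularized inversion.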
Thomason and Osborn (1992) used this approach to estimate aerosol mass with the multiwavelength SAGE II extinction kernels. The interpretation of lidar measurements using linear estimate techniques was explored in early studies by Chaikovskii and Shcherbakov (1985). The potential of this approach for treating elastic-Raman multiwavelength lidar measurements was studied by Donovan and Carswell (1997) under the assumption of a known refractive index. The technique was further explored in recent publications (De Graaf et al., 2009, 2010), where different aerosol models were used to invert optical data without prior information about the particle refractive index.
A basic processing step for a lidar point cloud is the classification of points as ground/off-ground, which is generally associated with the resampling of the data on a regular grid. Thanks to the accurate geometry of a lidar point cloud, many algorithms have been developed to automatically separate ground points from off-ground points (Sithole and Vosselman, 2004). Most of these approaches give good results when the topography is regular, but remain imperfect in the case of mixed landscapes and steep slope conditions: the parameters of the algorithms are often difficult to tune and do not fit over a large area. When ground points are misclassified as off-ground points, the accuracy of the DTM may decrease (it depends on the spatial resolution and on the interpolation method). Conversely, when off-ground points (vegetation or man-made objects) are considered as ground points, the DTM becomes erroneous, which can bias hydrological models. Vegetated landscapes with sparse vegetation in a mountainous area (alpine landscape) are particularly interesting for the study of natural hydrology and the phenomena of erosion (cf. Section 4). Nevertheless, the processing of such landscapes needs strong human interaction to correct the classification: since the detection of off-ground points by most algorithms is based on the detection of local slope changes, it may occur that the terrain (e.g., mountain ridges) shares the same properties. The DTM can therefore be over- or under-estimated in certain areas depending on the algorithm constraints.
Among efficient techniques, some directly exploit the 3D point cloud structure, while in many applications the point cloud is first binned into a 2D regular grid (rasterization process) on which computer vision approaches can be applied. Apart from some specific applications where LiDAR points are fused with other data (e.g. hyperspectral images), most techniques consist in computing features to describe the point clouds, before using such features to classify the scene under study. While early works focused on the characterization of single points (often through height and intensity) without including information related to their neighbours, more advanced approaches have included spatial relationships, using a set of spheres or cylinders (of variable radius) around each point to extract consistent geometric features. In this context, multiscale local 3D features (main orientation, variability around each point, etc.) have proven their efficiency to classify LiDAR scenes. Even if it is very efficient, the sphere used to assess the neighbourhood of points is isotropic (no orientation is promoted), which is not optimal since the geometry of objects is not taken into account. Therefore other multi-scale approaches have been proposed on LiDAR DEMs, such as the popular attribute profiles that produce a multiscale description of a pixel and its surroundings before proceeding to the classification. The main idea is to compute multi-scale spatial features by taking into account the geometry of the scene. In this work, we suggest exploring various information derived from the LiDAR point cloud within the framework of attribute profiles.
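The neighbourhood-based geometric features mentioned above are commonly derived from the eigenvalues of a local covariance matrix. The sketch below assumes a spherical (isotropic) neighbourhood and the usual linearity/planarity/scattering definitions; the function name, radius and feature names are illustrative, not from the source.

```python
import numpy as np

def local_shape_features(points: np.ndarray, center: np.ndarray, radius: float) -> dict:
    """Eigenvalue-based 3D shape descriptors for one point's spherical neighbourhood.

    points: (N, 3) array of xyz coordinates; center: (3,) query point.
    """
    # Gather the neighbourhood inside the sphere of the given radius.
    nbrs = points[np.linalg.norm(points - center, axis=1) <= radius]
    # PCA via the covariance of the neighbourhood coordinates.
    cov = np.cov(nbrs.T)
    l1, l2, l3 = np.sort(np.linalg.eigvalsh(cov))[::-1]  # l1 >= l2 >= l3
    return {
        "linearity": (l1 - l2) / l1,   # one dominant direction (edges, wires)
        "planarity": (l2 - l3) / l1,   # two dominant directions (roads, facades)
        "scattering": l3 / l1,         # fully volumetric (vegetation)
    }
```

Computing these features at several radii yields the multiscale description referred to in the text.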
The most well-known ground-based AOD retrieval techniques are sun photometry and LiDAR. Sun photometry is a passive optical technique that measures the extinction of direct-beam radiation at distinct wavelengths and retrieves the aerosol contribution from the total extinction. LiDAR is an active optical system which transmits light into the atmosphere and then collects the backscattered light signal to retrieve the aerosol attenuation in the total atmospheric column. The CIMEL sun photometer used by NASA is calibrated by the Langley technique. The MICROTOPS II sun photometer is a 5-channel hand-held sun photometer for measuring the instantaneous aerosol optical depth. The Sky Radiometer, manufactured by Prede Co. Ltd, Japan, uses the software SKYRAD.PACK for the PC interface and data analysis. A number of ground-based observatory networks have been established worldwide to measure the aerosol optical depth; AERONET of the USA, CARSNET of China and ARFINET of India are a few examples. All these scientific instruments need computer interface software specifically designed for them.
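The Langley calibration mentioned above rests on the Beer-Lambert law, ln V = ln V0 − τ·m, where V is the measured signal, m the airmass, τ the total optical depth and V0 the extrapolated top-of-atmosphere signal. A minimal sketch with synthetic numbers (the values of τ and V0 here are invented for illustration):

```python
import numpy as np

def langley_fit(airmass: np.ndarray, signal: np.ndarray) -> tuple:
    """Fit ln(signal) against airmass: slope gives -tau, intercept gives ln V0."""
    slope, intercept = np.polyfit(airmass, np.log(signal), 1)
    tau = -slope          # total columnar optical depth
    v0 = np.exp(intercept)  # calibration constant (top-of-atmosphere signal)
    return tau, v0

m = np.linspace(1.5, 5.0, 20)     # airmass values over a stable morning
v = 1.23 * np.exp(-0.18 * m)      # synthetic signal: tau = 0.18, V0 = 1.23
tau, v0 = langley_fit(m, v)
```

With V0 known from such a fit, a single later measurement at known airmass yields τ directly, from which the aerosol contribution is obtained by subtracting the molecular and gaseous terms.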
Received: 16 November 2015 – Published in Atmos. Chem. Phys. Discuss.: 15 December 2015 Revised: 31 May 2016 – Accepted: 6 June 2016 – Published: 5 July 2016
Abstract. Optical and microphysical properties of different aerosol types over South Africa measured with a multi-wavelength polarization Raman lidar are presented. This study could assist in bridging existing gaps relating to aerosol properties over South Africa, since limited long-term data of this type are available for this region. The observations were performed under the framework of the EUCAARI campaign in Elandsfontein. The multi-wavelength PollyXT Raman lidar system was used to determine vertical profiles of the aerosol optical properties, i.e. extinction and backscatter coefficients, Ångström exponents, lidar ratio and depolarization ratio. The mean microphysical aerosol properties, i.e. effective radius and single-scattering albedo, were retrieved with an advanced inversion algorithm. Clear differences were observed for the intensive optical properties of atmospheric layers of biomass burning and urban/industrial aerosols. Our results reveal a wide range of optical and microphysical parameters for biomass burning aerosols. This indicates probable mixing of biomass burning aerosols with desert dust particles, as well as the possible continuous influence of the urban/industrial aerosol load in the region. The lidar ratio at 355 nm, the lidar ratio at 532 nm, the linear particle depolarization ratio at 355 nm and the extinction-related Ångström exponent from 355 to 532 nm were 52 ± 7 sr, 41 ± 13 sr, 0.9 ± 0.4 % and 2.3 ± 0.5, respectively, for urban/industrial aerosols, while these values were 92 ± 10 sr, 75 ± 14 sr, 3.2 ± 1.3 % and 1.7 ± 0.3, respectively, for biomass burning aerosols.
However, care must be taken when using elevation as a parameter for the classification. As mentioned before, misclassification patterns occurred in the final tree species maps based on the relief of the study area. Therefore, it is important that the training and test data sets cover all possible terrain-specific variations of the occurring tree species. This means that reference data must consider spatial signatures, and the training and test sites must, if possible, be evenly distributed over the whole study area. Due to the limited accessibility of large areas in the National Park, the in situ data collection for this analysis mainly focussed on the lower elevations of the southern part, i.e. the Rachel-Lusen region. This may have led to misclassification, especially in the northern image. A classification based on training data acquired in the southern part may not be able to reproduce the forest structure of the northern part of the BFNP, i.e. the Falkenstein-Rachel region, which is still strongly characterized by forest management in contrast to the Rachel-Lusen region (13).
conclusions. This is due to the spatial resolution of the sensors. The 3 and 5 m pixel sizes of PlanetScope and VENµS, respectively, make it difficult to correctly classify trees surrounded by other land cover. Pixels containing the tree crown likely also contain spectra from surrounding features such as grass, roads or buildings. This affects the spectral response of the pixel, sometimes enough that it no longer clearly belongs to its proper class. Changing foliage in clusters of deciduous trees is clearly detected, with a moderate change in green reflectance and a large decrease in NDVI. However, there is less change in pixel values for many trees along residential streets. These trees are surrounded by grass, so during periods without leaves the pixel value may be influenced by reflectance from grass visible through the bare branches. In April, grass already has fairly high reflectance in the green and NIR, while deciduous trees are characterized by lower values in these wavelength ranges during that time. Trees also overhang roads and
Acknowledgements. All authors thank Carlos Rodrigues for his great input to this research. We also acknowledge the work of José Laginha Palma and Jose Carlos Matos. Their great contribution during all phases of the field campaign helped to make the campaign a success. We also thank Mike Courtney for his work in the planning and design process of the campaign. We would like to thank Nikolas Angelou for the operation of the short-range WindScanner system and his help with the data processing and Guillaume Lea, who helped to process the data of the long-range WindScanner system. Furthermore, the authors acknowledge the FarmOpt project for the financial support for the Perdigão 2015 field campaign. FarmOpt (http://energiforskning.dk/en/projects/detail?program=All&teknologi=All&field_bevillingsaar_value=&start=&slut=&field_status_value=All&keyword=FarmOpt&page=0, last access: 13 September 2018) was funded by the Danish Energy Technology Development and Demonstration Program (EUDP), project no. 64013-0405. We thank the Danish Energy Agency for funding through the New European Wind Atlas project.
The development of remote sensing technologies has given researchers the opportunity to work with unprecedented volumes of information on large areas of forest or other terrain. However, the high dimensionality of large datasets, and of hyperspectral datasets in particular, introduces new challenges into the process of analyzing and utilizing these data. The Hughes phenomenon describes the problem of decreasing predictive power of additional variables that contain information on a fixed number of known classes. In the case of tree species prediction, hyperspectral data on a forest for which the researcher has information on only a small number of ground truth areas might be more redundant than insightful (Dalponte et al. 2009). For this reason, machine learning and data mining techniques for dimensionality reduction and pattern finding are often employed in species classification studies, as well as for predictive models of species distributions or habitat suitability. Data mining methods are designed to take advantage of situations where a few known cases or ground truths are used to characterize a larger area or dataset. For this reason, they are recommended over, for example, linear regression models when attempting to find explanatory patterns in data without preexisting rules or assumptions (Franklin 2009).
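One common mitigation of the Hughes phenomenon is to project the high-dimensional spectra onto a handful of principal components before classification. A minimal sketch with synthetic data (the band counts, variances and variance threshold are illustrative, not from the study):

```python
import numpy as np

# 500 synthetic pixels x 100 spectral bands, of which only the first
# 3 bands carry strong class-relevant variance.
rng = np.random.default_rng(1)
pixels = rng.normal(size=(500, 100))
pixels[:, :3] += rng.normal(scale=5.0, size=(500, 3))

# PCA via SVD of the mean-centred data matrix.
X = pixels - pixels.mean(axis=0)
_, s, vt = np.linalg.svd(X, full_matrices=False)
explained = (s ** 2) / (s ** 2).sum()

# Keep only as many components as needed for 50 % of the variance,
# then project the pixels into that reduced space.
k = int(np.searchsorted(np.cumsum(explained), 0.5)) + 1
reduced = X @ vt[:k].T
```

A classifier trained on `reduced` sees far fewer, less redundant variables than one trained on the raw 100 bands, which is exactly the regime where limited ground truth goes furthest.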
A filtering algorithm was proposed that is applicable to areas where flat and hilly areas are interspersed. The proposed algorithm is based on a slope-based morphological filtering approach. The slope parameter used in the proposed algorithm is updated after an initial estimation of the DTM, and thus local terrain information can be included. Because of this update, the extraction of GPs and the final DTM are improved. During the interpolation procedure when generating the DTM, bodies of water are masked to prevent incorrect GP selection near rivers and bridges. Validation of the results against the ISPRS benchmark datasets indicated that the accuracy of the proposed algorithm is greater than that of TerraScan for most samples, and is almost equal to that of Mongus and Žalik's algorithm. Qualitative comparison using the study area data shows that the proposed algorithm extracted a greater number of GPs on narrow streets than TerraScan did. In the case of dense urban areas, the overall classification accuracy of the proposed algorithm was approximately equal to, or higher than, that of TerraScan. Therefore, it is concluded that the proposed filtering algorithm performs GP extraction and DTM generation effectively for urban areas. In future work, the 3D modeling of buildings in dense urban areas will be reported, based on the proposed filtering algorithm.
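The core idea of slope-based filtering can be sketched on a gridded surface: a cell is kept as a ground point only if its rise above the lowest cell in a local window does not exceed a slope threshold. This is a simplified illustration with a fixed window and threshold, not the proposed algorithm itself (which updates the slope parameter from an initial DTM estimate and masks water bodies):

```python
import numpy as np

def slope_filter(z: np.ndarray, cell_size: float, max_slope: float) -> np.ndarray:
    """Label grid cells as ground (True) or off-ground (False).

    z: 2D array of elevations; max_slope: maximum allowed rise/run
    relative to the lowest neighbour in a 3x3 window.
    """
    rows, cols = z.shape
    ground = np.ones_like(z, dtype=bool)
    for i in range(rows):
        for j in range(cols):
            win = z[max(0, i - 1):i + 2, max(0, j - 1):j + 2]
            rise = z[i, j] - win.min()
            if rise / cell_size > max_slope:   # too steep above the local minimum:
                ground[i, j] = False           # classify as off-ground (building, tree)
    return ground
```

On flat terrain a single high cell (e.g. a small building) is flagged off-ground, while gently varying terrain passes through untouched; misbehaviour on ridges, where terrain itself is steep, is exactly the failure mode discussed earlier.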
For the specification of terrain surface products, the Digital Terrain Elevation Data (DTED) levels from 1 to 5 have been suggested. For HN simulations, especially the resolution and accuracy levels are of interest. Level 2 is fulfilled by DTMs with a horizontal resolution (grid edge length) of 1″ (approx. 20–30 m) and an absolute vertical accuracy of 18 m (90 % error bound). It may be reached by spaceborne Interferometric Synthetic Aperture Radar (InSAR) using the X-band (e.g. SRTM, Rabus et al., 2003), which is, however, scattered back at the tree crowns. Level 3, also referred to as High Resolution Terrain Information (HRTI-3), with 12 m resolution and 10 m accuracy, is aimed at by new satellite InSAR missions (Krieger et al., 2007). HRTI-4 (6 m resolution, 5 m accuracy) can be achieved by airborne InSAR and stereo photogrammetry (Kraus, 2007). At this level it makes a notable difference whether the ground surface is recorded or the first visible surface from the bird's perspective (tree crowns, roofs, etc.). Short-wavelength radar and passive optical imaging can provide high resolution, but cannot take measurements from the ground surface below the forest cover (Baltsavias et al., 2008). HRTI-5 can currently only be provided for larger areas in a cost-efficient manner by airborne LiDAR; it prescribes a resolution of 1 m, an absolute vertical accuracy of 5 m, and a relative (point-to-point) precision of 25 cm. It is especially the accuracy requirements that argue for airborne LiDAR.
In this study, we first calculated surface elevation for ground and/or cloud tops using lidar range measurements, pointing angle and aircraft altitude. The measurements showed that the relative surface elevation change from one data point to the next increases with flight distance. During the 2014 campaign, one flight was made over the Pacific Ocean near the California coastline with low winds. The lidar range measurements made at 10 Hz show a 0.5 m standard deviation in the relative surface elevation changes, as shown in Fig. 3. The standard deviation of the relative surface elevation changes increased to about 1 m after measurements were averaged over 5 s or 1 km horizontal distance. Although data aggregation before retrieval can increase SNR and improve retrieval precision for flat surfaces, over rougher surfaces like mountains there can be more variation in the photon path length, which can limit the data averaging time before retrieval. Since surface roughness and XCO2 variations
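The elevation calculation described above can be sketched as follows. This assumes a simple geometric model, elevation = aircraft altitude − range·cos(off-nadir angle); the function name and the example altitudes/ranges are illustrative, not the study's actual data.

```python
import numpy as np

def surface_elevation(altitude_m: float, range_m: np.ndarray, off_nadir_deg: float) -> np.ndarray:
    """Surface (or cloud-top) elevation from lidar range, pointing angle and aircraft altitude."""
    return altitude_m - range_m * np.cos(np.radians(off_nadir_deg))

# Three consecutive nadir-pointing shots from 9000 m altitude:
elev = surface_elevation(9000.0, np.array([8999.5, 9000.2, 8999.8]), 0.0)

# Shot-to-shot relative elevation change and its spread, the quantity
# whose standard deviation is quoted in the text.
rel_change = np.diff(elev)
sigma = np.std(rel_change)
```

Averaging such shots over 5 s windows before retrieval corresponds to replacing `elev` by block means, which trades horizontal resolution for SNR as discussed above.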
exponent (α) values, and LR and depolarization (δ) ratios. A deeper synergy between the lidar and sun/sky-photometer data is achieved in the GARRLiC algorithm developed at LOA (Lopatin et al., 2013). GARRLiC inverts the coincident lidar and sun/sky-photometer radiometric data simultaneously. The other marked distinction between GARRLiC and LIRIC is the inversion of two distinct aerosol modes, which makes it possible to retrieve aerosol optical and microphysical properties independently for both the fine and coarse modes. Such differences in the algorithms can influence the results obtained by the two systems. The GARRLiC method is based on the Dubovik inversion code (Dubovik and King, 2000; Dubovik et al., 2011), which has previously been used for processing AERONET data. The synergistic retrieval is expected to improve aerosol property retrievals; the lidar observations are expected to improve the observations of the columnar properties of aerosols in the backscattering direction, and sun/sky photometers provide information on aerosol properties, such as their amount or type, required for lidar retrievals without making assumptions based on climatological data.
This study investigates the possibility of classifying soil-landforms in urban areas based on analysis of geomorphometric features derived from very high-resolution LiDAR data. The classification applied in this study is based on an object-based image analysis (OBIA) approach instead of crisp classifications to extract morphometric parameters from DEMs. This classification is known as knowledge-based fuzzy classification [6,24]. Fuzzy classification has achieved better results than other classifications in organizing the segmentation of the land surface and the classification of object classes in a hierarchical manner [6,24]. Moreover, when classifying landforms in a specific geographical area, a trade-off between the delineation of landforms in this area and their classification is required. This is possible in the OBIA approach. The study also attempts to show whether there is sufficient correlation between landforms and the distribution of soil types in these areas, as well as whether there are specific advantages to this classification when using a LiDAR DEM instead of other DEMs whose use is based on the classification of pixels. This is done through a comparative geomorphometric analysis of a LiDAR DEM with a resolution of 1 m and an ASTER DEM with a resolution of 30 m. To analyze the LiDAR DEM as compared to the ASTER DEM for the soil-landforms classification, the study area was reduced to the area of the topographical chart Berlin–Rüdersdorf with a scale of 1:25,000. Such investigations of soil-landforms classification help to improve the understanding of the effect of different factors on soil formation and distribution in urban areas and to improve understanding of the soil-landscape relationship. All of this ultimately improves traditional soil surveys in order to generate soil maps in a reasonable time and achieve better and more usable results.
As a range-resolved active remote-sensing instrument, lidar has been used to obtain atmospheric information for correcting cosmic-ray observatory data. Several lidar systems have been used at cosmic-ray observatories. Most of them are elastic lidars with UV or visible wavelengths. A single-wavelength elastic micro-pulse lidar system, operating at a 532 nm wavelength, has been operated along with the MAGIC telescopes. Atmospheric corrections have been used in the MAGIC data analysis, which makes it possible to extend the effective observation time of MAGIC under adverse atmospheric conditions and to reduce the systematic errors of energy and flux in the data analysis. Another single-wavelength elastic lidar, operating at a 355 nm wavelength, has been used to obtain atmospheric information in measurements of cosmic-ray air-shower fluorescence. Due to the assumption of a lidar ratio in the inversion of atmospheric parameters, a single-wavelength elastic lidar cannot obtain atmospheric information precisely. Raman lidar and high spectral resolution lidar (HSRL) can measure the atmospheric parameters precisely without any such assumption. A Raman lidar, operating at a 355 nm wavelength and with three receive
Although optical imagery has proved to be successful for land cover classification, only data acquired in daytime and under cloud-free conditions can be effectively used for this purpose. In contrast, active sensors such as synthetic aperture radar (SAR) and light detection and ranging (LiDAR) operate in both day and night conditions and use timing information of the outgoing pulse and incoming reflected energy to provide vertical information. Operating in the microwave portion of the spectrum, SAR is also an all-weather sensing technology and has the capability to penetrate vegetation, while the small footprint and high pulse rate of LiDAR allow it to penetrate gaps in the canopy. Recently, the use of multi-sensor data for performing land cover classification has gained significant attention in the remote sensing community due to the capability to exploit complementary information (spectral information from the optical sensors and structural information from the active sensors) to improve the overall classification accuracy of the land cover map.
We used repeated airborne LiDAR acquisitions from 2012 (30 July 2012 and 4 September 2012), 2013 (10 September 2013), 2016 (22 March 2016 and 23 March 2016), and 2018 (14 October 2018) to quantify forest structure (e.g., canopy height), carbon stocks, and related temporal changes (forest dynamics). The LiDAR campaigns were performed from a maximum flight altitude of 800 m. The acquisitions covered approximately 670 ha. The multi-temporal information on forest structure allowed for the quantification of changes in height and carbon stocks at large scale, related to processes of tree growth and mortality. The overlap of the flightlines was between 65% and 70%, with a field of view ranging from 11° to 15°. The average laser point density, expressed as the number of laser returns per m2 (ppm2), was between 27 and 38.9 ppm2. ALS data were acquired by a Brazilian ALS company (GEOID Ltd) using an Optech ALTM 3100 laser sensor. The data were provided by the United States Agency for International Development (USAID) and the Empresa Brasileira de Pesquisa Agropecuária (Embrapa) through the Sustainable Landscapes project (http://mapas.cnpm.embrapa.br/paisagenssustentaveis/). LiDAR point cloud data were processed using LAStools version 160.905 [42] and the FUSION software [43]. Canopy Height Models (CHMs) for each flightline were built by subtracting the Digital Terrain Model (DTM) from the Digital Surface Model (DSM) at 1 m resolution using the R programming language [44]. For each year, ALS data were used as an input to the parametric Aboveground Carbon Density (ACD) model to estimate carbon stocks (Longo et al. 2016). The ACD model is based on the mean top-of-canopy height (from the CHM), producing ACD estimates (Mg C ha−1) of live trees over a minimum area of 0.25 ha. The model excludes standing dead trees, which allows comparison with pan-tropical maps [8].
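The CHM construction step is a per-cell difference of the two gridded surfaces. A minimal sketch with synthetic 1 m cells (the elevation values are invented for illustration; the study used R rather than Python):

```python
import numpy as np

# Canopy Height Model = first-return surface minus ground surface.
dsm = np.array([[105.2, 130.7],
                [100.1, 122.4]])   # Digital Surface Model, m
dtm = np.array([[100.0, 100.5],
                [100.0, 100.4]])   # Digital Terrain Model, m

# Clip at zero so that small DTM interpolation errors cannot
# produce negative canopy heights.
chm = np.clip(dsm - dtm, 0.0, None)
```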
As the LiDAR-derived allometric equation requires only the CHM, it allows us to directly relate forest height to carbon stocks. In order to use the CHM as an input to the model, all CHMs were resampled to a 0.25 ha grid size to match the spatial resolution of the model. ACD was calculated by Equation (1):
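The resampling-plus-model workflow can be sketched as follows. Note that the coefficients `a` and `b` below are placeholders for illustration only; they are NOT the fitted values of the Longo et al. (2016) model, which are given by Equation (1) in the original text.

```python
import numpy as np

def mean_tch(chm_1m: np.ndarray, block: int = 50) -> np.ndarray:
    """Aggregate a 1 m CHM to mean top-of-canopy height on a
    0.25 ha (50 m x 50 m) grid, matching the ACD model resolution."""
    r, c = chm_1m.shape
    trimmed = chm_1m[: r - r % block, : c - c % block]
    return trimmed.reshape(r // block, block, c // block, block).mean(axis=(1, 3))

def acd(tch: np.ndarray, a: float = 0.05, b: float = 1.8) -> np.ndarray:
    """Power-law ACD model, ACD = a * TCH**b (Mg C ha^-1).
    a and b are illustrative placeholders, not the published coefficients."""
    return a * tch ** b
```

Because ACD depends on TCH alone, differencing the per-epoch ACD grids directly yields the carbon-stock changes attributed to growth and mortality.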
it has been confused with the buildings class. Obviously, the LiDAR data allowed the SVM to discriminate the motorway as ground when classifying the multispectral imagery with LiDAR added. On the contrary, in the central part of Figure 5 (a) we can see several buildings detected in Figure 5 (b), while in Figure 5 (c) these same buildings have been wrongly taken for artificial ground. Furthermore, the buildings detected in Figure 5 (c) are fuzzier than those in Figure 5 (b); again, the LiDAR data has allowed sharper discrimination by considering height. Visits to the terrain prove that the classification in Figure 5 (b) distinguishes the trees class from the shrubs-and-lawn class better than the classification in Figure 5 (c). For instance, note the red diagonal of Figure 5 (c) and the cross form from the middle left side of the image to the middle upper side. See Figure 5 (b) and the roundabout at the upper right corner: Figure 5 (b) correctly detects shrubs and lawn (with two trees also correctly marked by two of the red points), while Figure 5 (c) has wrongly classified the objects as mostly trees.
4. RESULTS AND DISCUSSION
Application of PCA alone showed that roads, parts of buildings, as well as some parts of thick tree trunks, are classified as planes. Tree leaves and corners of windows are classified as scattered, volumetric points. The rest of the points receive a linear representation. Some parts of facades (corners of windows and buildings) are classified as trees, some parts of tree trunks are classified as facades, and there are many such inaccuracies (Figure 6). Also, PCA classifies some surfaces as planar which may in fact be a road, a building facade or a car surface. To further improve classification, feature vectors for a dynamic neighbourhood of 0.5 m were developed and assigned to the training model. Using a Python script, running the program and classification took more than 12 hours, and sometimes the well-configured computer would hang owing to the large amount of data. To reduce the time as well as to increase the accuracy, data mining proved to be efficient. After applying the random forest algorithm, the time taken to classify the urban scene was reduced by almost half. Random forest ran efficiently on the large database and handled thousands of input variables without variable deletion.
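The plane/linear/scattered labelling produced by the PCA step can be sketched as a rule on the sorted covariance eigenvalues of each point's neighbourhood. The thresholds and function name below are illustrative assumptions, not the values used in this study:

```python
def label_from_eigenvalues(l1: float, l2: float, l3: float) -> str:
    """Assign a local geometric label from sorted eigenvalues l1 >= l2 >= l3
    of a point's neighbourhood covariance (illustrative 0.5 thresholds)."""
    if (l1 - l2) / l1 > 0.5:
        return "linear"      # one dominant direction (edges, poles, wires)
    if (l2 - l3) / l1 > 0.5:
        return "planar"      # two dominant directions (roads, facades)
    return "scattered"       # volumetric neighbourhoods (tree leaves)
```

Feeding such eigenvalue-derived features to a random forest, rather than thresholding them by hand, is what allowed the classifier to resolve the facade/trunk confusions described above.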