Partly because of this, and the fact that there are glaciers with very little liquid water present, radar methods are believed to be worthy of further research [r]


Our interest was to retrieve surface backscatter; in this study, images with VV polarization were utilized. The data were initially pre-processed for orbit, radiometric and geometric corrections. Co-registration was applied using an external DEM, i.e., the SRTM 90 m DEM. After taking the subset of the area of interest, the geometric correction was completed. Even after geometric correction, some distortions remain due to the side-looking characteristics of SAR imagery; layover and shadow can reach 3% of the glacier areas in the current research area. Finally, the intensity values were converted into dB. Then, the RGB composite of the SAR data of three different seasons was generated for each year under consideration. On the northern slope of the Kyrgyz Ala-Too range, the peak of dry snow occurs at the end of January and the start of February, wet snow in May, and the peak of glacier melting at the end of August. The different tonal variations in the backscattering of composite images represent different glacier radar zones (Mahagaonkar, 2019). Therefore, images from the spring (when wet snow facies may occur), summer (peak of glacier melting), and winter (peak of dry snow accumulation) seasons, when the glacier surface is in different backscatter conditions, were used. The spring image was assigned to the red channel, summer to green, and winter to blue to generate the RGB composite.
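The dB conversion and three-season compositing steps can be sketched as follows. The stretch limits and the tiny test arrays are illustrative assumptions, not values from the study.

```python
import numpy as np

def to_db(intensity, floor=1e-10):
    """Convert linear backscatter intensity to decibels."""
    return 10.0 * np.log10(np.maximum(intensity, floor))

def seasonal_rgb(spring_db, summer_db, winter_db, lo=-25.0, hi=0.0):
    """Stack spring/summer/winter backscatter (dB) into an RGB composite.

    Each channel is linearly stretched from [lo, hi] dB to [0, 1];
    the stretch limits are illustrative, not from the study.
    """
    def stretch(band):
        return np.clip((band - lo) / (hi - lo), 0.0, 1.0)
    return np.dstack([stretch(spring_db), stretch(summer_db), stretch(winter_db)])

# Tiny hypothetical 2x2 intensity images for the three seasons
rng = np.random.default_rng(0)
spring = to_db(rng.uniform(0.01, 1.0, (2, 2)))
summer = to_db(rng.uniform(0.01, 1.0, (2, 2)))
winter = to_db(rng.uniform(0.01, 1.0, (2, 2)))
rgb = seasonal_rgb(spring, summer, winter)
print(rgb.shape)  # (2, 2, 3)
```

In practice the stretch limits would be chosen per scene (e.g., from percentiles of the dB histogram) so that each season's tonal variation fills its channel.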


Journal of Glaciology
ent datasets are well described by linear fits, but with different slopes. For shallow ice of large ice sheets, a linear relationship between depth and age, t, is a very reasonable approximation. Therefore, the Figure 3 data seem to support Equation (1). However, the constant K varies with the method used to evaluate the mean squared grain-size up to a factor of about 2 (see Table I). Moreover, if one extends the linear fit for A50 up to the surface (0 m), a much larger "initial" value is found compared to the A or L² methods. This results from a bias introduced by a representativity evolving with mean grain-size (see above). The relationship between A and the mean cross-sectional area of the 50 largest grains, A50, is not trivial and obviously depends on the size distribution, which could change over time and from site to site (because of temperature and climatic differences). During normal grain growth, the shape of the grain-size distribution is not supposed to change, and some authors have proposed log-normal distributions to describe experimental data or results of computer simulation (see, e.g., Atkinson, 1988; Anderson and others, 1989). Such a distribution, however, so far has no theoretical basis. Moreover, the peak value and the standard deviation of the distribution change over time. Therefore, an estimation of activation energy for grain growth by comparison of grain-growth kinetics, deduced from the A50 method, at different sites with different mean annual temperatures (see, e.g., Gow, 1969), has to be made with caution. As mentioned above, an additional problem of the A50 method comes from the increasing representativity of the 50 largest grains as the mean grain-size increases. In the present analysis, this representativity increased from about 10% at 100 m to 25% at 360 m, which is less than in the Gow (1969) analysis (≥25%).
To examine a possible effect of such increasing representativity, we computed the mean cross-sectional area of the largest 25% of grains in each section, A25% (see Fig. 3). With this method, the extrapolation of the linear fit up to 0 m is very close to that found with the A or L² methods (see Table I), and 75% of the available information is still lost.
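The difference between the fixed-count A50 metric and the fixed-fraction A25% metric can be illustrated with a short sketch. The log-normal grain areas are synthetic, chosen only because the text mentions log-normal distributions as a common empirical description.

```python
import numpy as np

def mean_area_largest(areas, k):
    """A50-style metric: mean cross-sectional area of the k largest grains."""
    return np.sort(np.asarray(areas, dtype=float))[::-1][:k].mean()

def mean_area_largest_fraction(areas, frac):
    """A25%-style metric: mean area of the largest `frac` fraction of grains."""
    areas = np.sort(np.asarray(areas, dtype=float))[::-1]
    k = max(1, int(round(frac * len(areas))))
    return areas[:k].mean()

# Synthetic log-normal grain areas (mm^2) -- purely illustrative
rng = np.random.default_rng(1)
areas = rng.lognormal(mean=0.0, sigma=0.5, size=500)

a_mean = areas.mean()                             # A: mean over all grains
a50 = mean_area_largest(areas, 50)                # 50 largest of 500 = top 10%
a25pct = mean_area_largest_fraction(areas, 0.25)  # largest 25%

# The fixed-count metric exceeds the fixed-fraction metric, which in turn
# exceeds the plain mean -- the bias discussed in the text.
assert a50 > a25pct > a_mean
```

As the sample size (or mean grain-size) grows, the 50 largest grains represent a changing fraction of the population, whereas the largest-25% metric keeps its representativity constant.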


KEY WORDS: SAR, polarimetry, backscattering, classification, snow, snow mapping
ABSTRACT:
Conventional studies to assess the annual mass balance of glaciers rely on single-point observations in combination with model and interpolation approaches. Only recently have airborne and spaceborne data been used to support such mass balance determinations. Here, we present an approach to map temporal changes of the snow cover in glaciated regions of Tyrol, Austria, using SAR-based satellite data. Two dual-polarized SAR images were acquired on 22 and 24 September 2014. As X- and C-band reveal different backscattering properties of snow, both TerraSAR-X and RADARSAT-2 images were analysed and compared to ground-truth data. Through application of filter functions and processing steps comprising a Kennaugh decomposition, ortho-rectification, radiometric enhancement and normalization, we were able to distinguish between dry and wet parts of the snow surface. The analyses reveal that wet snow can be unambiguously classified by applying a threshold of -11 dB. Neither bare ice at the surface nor a dry snowpack appears in the radar data with such low backscatter values. From the temporal shift of wet snow, a discrimination of accumulation areas on glaciers is possible for specific observation dates. Such data can support periodic monitoring of glaciers with high spatial coverage, independent of weather or glacier conditions.
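The wet-snow classification reduces to a simple threshold test. The sketch below applies the -11 dB threshold from the abstract to a hypothetical backscatter array.

```python
import numpy as np

WET_SNOW_THRESHOLD_DB = -11.0  # threshold reported in the abstract

def classify_wet_snow(sigma0_db):
    """True where backscatter is low enough to indicate wet snow.

    Wet snow absorbs the microwave signal, so its backscatter drops below
    the threshold; dry snow and bare ice return higher values.
    """
    return np.asarray(sigma0_db) < WET_SNOW_THRESHOLD_DB

# Hypothetical sigma0 values in dB
scene = np.array([[-15.2, -8.4],
                  [-11.5, -3.0]])
mask = classify_wet_snow(scene)
print(mask)  # [[ True False]
             #  [ True False]]
```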

On the other hand, the glacier volumes calculated from GPR-retrieved ice-thickness data are also useful to calibrate the free parameters of the physically based models relating the ice-thickness distribution to the glacier topography, mass balance and dynamics.
These models are the alternative to the V-A relationships for calculating the volume of large populations of glaciers. Moreover, as mentioned earlier, these two families of methods are a usual tool for regional and global projections of mass losses from glaciers, and the corresponding sea-level rise, in response to climate scenarios. Finally, the estimate of the total volume of the entire population of glaciers on Earth provides knowledge of its potential contribution to sea-level rise and allows predicting how long glaciers will continue to be the largest contributors to sea-level rise.
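For contrast with the physically based models, a V-A relationship is a one-line power law. The sketch below uses the commonly cited Bahr et al. (1997) coefficients, which are illustrative here and not values calibrated for any glacier population discussed above.

```python
def glacier_volume_km3(area_km2, c=0.034, gamma=1.375):
    """Volume-area scaling V = c * A**gamma (V in km^3, A in km^2).

    Default c and gamma are the commonly cited Bahr et al. (1997) values
    for valley glaciers -- illustrative, not calibrated coefficients.
    """
    return c * area_km2 ** gamma

# A hypothetical 10 km^2 glacier
v = glacier_volume_km3(10.0)
print(f"{v:.3f} km^3")  # ~0.806 km^3
```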


3 Technical specifications
Experience with the high-power HF system at Tromsø gives us a reasonable idea of what power levels are required to achieve most of the capabilities outlined already. In particular, experiments with the Tromsø heater have demonstrated that artificial radar aurora can be produced at power levels as low as 10 MW of ERP. The ionospheric scatter for the all-sky radar also has a low power requirement; indeed, for such operation SPEAR would transmit an ERP comparable to that of the existing SuperDARN radars, of 0.4 MW. However, artificial ULF sounding, field-guided wave injection, and nonlinear interactions need powers of several times that level. With this in mind, a phased approach has been adopted for construction and deployment of SPEAR which will allow the system capabilities to evolve. A modular system of distributed transmitters is envisaged as the best approach. This allows the construction of a system to deliver 28 MW ERP with fewer antennas and transmitters than are required for the full system. An ERP of 28 MW would be sufficient to generate artificial radar aurora and permit operation of SPEAR as an
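The ERP figures above follow from simple arithmetic: total transmitted power times array antenna gain. The example numbers in the sketch are illustrative assumptions, not actual SPEAR design parameters.

```python
def erp_mw(tx_power_kw, n_transmitters, array_gain_db):
    """Effective radiated power in MW for an array of identical transmitters.

    ERP = total transmitted power x antenna gain (linear). The example
    numbers below are illustrative, not actual SPEAR design parameters.
    """
    total_kw = tx_power_kw * n_transmitters
    gain_linear = 10.0 ** (array_gain_db / 10.0)
    return total_kw * gain_linear / 1000.0  # kW -> MW

# e.g. 24 transmitters of 4 kW each through a 24.6 dB array gain
print(round(erp_mw(4.0, 24, 24.6), 1))  # ~27.7 -- near the 28 MW figure
```

A modular design scales both factors: adding transmitters raises the total power, while adding antenna elements raises the array gain.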

2 Assistant Professor, Computer Department, iNurture, R.K. University, Rajkot, Gujarat, India
Abstract: Resolution often defines the quality of an image. Resolution also defines the clarity of graphics, which simply means that the higher the resolution, the clearer the graphics, which is the primary requirement in recent applications. Super-resolution methods have become a significant research area due to the swiftly growing interest in high-quality graphics in several computer vision and pattern recognition approaches, which has led to the invention of various SR methods. According to the number and sequence of images given as input, two kinds of super-resolution methods can be distinguished: single- or multi-input-based methods. Certainly, processing multiple inputs can lead to an interesting output. High resolution can also be accomplished by use of good sensors and optics, but this is very expensive and also limits the achievable pixel density within the image. Alternatively, we can use image-processing methods to obtain high-resolution graphics from low-resolution graphics, which can be a very effective and reasonable solution. This form of image improvement is called super-resolution image reconstruction. Image super-resolution reconstruction uses one or a group of degraded images to produce a high-resolution image, overcoming the limitations or ill-posed conditions of the image acquisition process to achieve better content visualization. Super-resolution is most often useful in forensic imaging, where the extraction of minute details in an image can help to tackle major crime cases.

sulting from choosing a low r value), then the marginal cells with hf = hga receive more weight, and the result is a shallow glacier cross section, as illustrated in Fig. A1 for the case r = 0.1. If the interpolation involves a high ratio of random to marginal cells (this is the case when the chosen value of r is large), then the random grid cells with hf > hga receive more weight. Hence a cross section with very steep side walls results, as shown for the case r = 0.9 in Fig. A1. For these reasons the parameter r needs adjustment to achieve reasonable cross sections of valley glacier tongues. Different values for r were tested to achieve cross sections similar to the idealized ones. Best agreement resulted at r = 0.3. In Fig. A2 GlabTop2 cross sections using the above settings are compared to the idealized cross sections. The agreement is not perfect; the cross sections of wide glaciers resemble power functions with b values clearly exceeding 2.5. A better agreement could be achieved using a lower r value, but the drawback would be a general underestimation of ice thickness in narrower glacier sections.
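The effect of r can be illustrated with a toy 1-D cross section, using inverse-distance weighting as a hypothetical stand-in for the actual GlabTop2 interpolation: the margins are sampled at zero thickness, the interior "random" cells at a constant modelled thickness, and r sets the ratio of interior to marginal samples.

```python
import numpy as np

def idw_profile(xs, sample_x, sample_h, power=2.0):
    """Inverse-distance-weighted 1-D interpolation (a hypothetical stand-in
    for the gridded interpolation used in GlabTop2)."""
    out = np.empty_like(xs, dtype=float)
    for i, x in enumerate(xs):
        d = np.abs(sample_x - x)
        if d.min() < 1e-9:                  # exactly on a sample
            out[i] = sample_h[np.argmin(d)]
        else:
            w = 1.0 / d ** power
            out[i] = np.sum(w * sample_h) / np.sum(w)
    return out

def cross_section(r, width=1000.0, h_interior=200.0):
    """Toy valley cross section: two marginal cells at h = 0 and interior
    cells at a constant modelled thickness; r (< 1) sets the ratio of
    interior ('random') to marginal samples."""
    n_interior = max(1, round(r / (1.0 - r) * 2))
    sample_x = np.concatenate([[0.0, width],
                               np.linspace(0.25 * width, 0.75 * width, n_interior)])
    sample_h = np.concatenate([[0.0, 0.0], np.full(n_interior, h_interior)])
    xs = np.linspace(0.0, width, 41)
    return idw_profile(xs, sample_x, sample_h)

shallow = cross_section(r=0.1)   # marginal cells dominate -> shallow walls
steep = cross_section(r=0.9)     # interior cells dominate -> steep walls
assert steep[5] > shallow[5]     # thicker ice near the margin when r is high
```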


Very recently, the performance of the LM algorithm that was published in Publication P3 was compared to three other methods [16]. The other methods were 1) Park's method (or linear demodulation, as named by the authors; this should not be confused with the PCA method in Chapter 5.1.2, which is usually the method referred to as linear demodulation), 2) the least squares (LS) method, and 3) the compressed sensing (CS) method (the same as the L1-norm-based method). The comparison was performed in terms of accuracy and computational complexity. The accuracy was quantified with the sum of squared error (SSE) values, or, to be precise, with the mean squared error (MSE) values, as the SSE expression was divided by N, the number of data samples. They found that "the SSE values verify that the LM method has the best accuracy," while the accuracy of the CS method is very close. This is an important acknowledgment from an independent and leading team in the field. Moreover, it is good to know that the method seems to work in practice in a rather different application. The paper studies radar monitoring for structural health monitoring of bridges and other civil infrastructures. The purpose is to implement the center estimation methods in wireless sensor networks (WSNs). Also in Publication P7, it was demonstrated that the LM algorithm works with real data. The study consisted of whole-night recordings with three patients in an uncontrolled environment. However, the results of Publication P7 are revisited in Chapter 6.1. The paper [16] raises the concern of slow convergence of the LM algorithm, which is an important issue. However, there may be some differences in the implementation of the LM algorithm. In [16], the LM algorithm converges in around 775 iterations. Based on a couple of runs with real respiration data, our implementation of the LM algorithm converges in approximately 7 iterations (mean = 6.7 iterations; std = 1.1 iterations).
Moreover, according to our experience, if the LM algorithm does not converge in roughly 20 iterations, it will not produce good results.
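For readers unfamiliar with the center-estimation problem, the sketch below fits a circle center to synthetic arc data with a Levenberg-Marquardt solver. It uses SciPy's least_squares(method='lm') as a stand-in for the thesis's own LM implementation, and the arc data are made up; with well-behaved data like this, LM converges in a handful of iterations.

```python
import numpy as np
from scipy.optimize import least_squares

def circle_residuals(params, x, y):
    """Distance of each sample from the circle center, minus the radius."""
    cx, cy, r = params
    return np.hypot(x - cx, y - cy) - r

# Synthetic samples on a noisy half-circle arc (made-up data; in the radar
# application, IQ samples trace only part of a circle as the chest moves)
rng = np.random.default_rng(2)
theta = np.linspace(0.0, np.pi, 100)
true_cx, true_cy, true_r = 1.0, -0.5, 2.0
x = true_cx + true_r * np.cos(theta) + rng.normal(0.0, 0.01, theta.size)
y = true_cy + true_r * np.sin(theta) + rng.normal(0.0, 0.01, theta.size)

fit = least_squares(circle_residuals, x0=[0.0, 0.0, 1.0],
                    args=(x, y), method='lm')
cx, cy, r = fit.x
print(np.round(fit.x, 2))  # close to [ 1.  -0.5  2. ]
```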


Ice-thickness data were interpolated onto a regular grid using ordinary kriging. Kriging is a geostatistical interpolation method in which the value at an unobserved location is predicted by a weighted average of the values at surrounding locations, with weights selected according to a model fitted to a variogram that describes the spatial correlation of the data (Cressie, 1993). The use of kriging interpolation differs from a previous paper (Martín-Español et al., 2013), where we used the Gridfit routine available for MATLAB (D'Errico, 2006), choosing the triangular interpolation method and leaving the default values for the remaining parameters (smoothness = 1, regularizer = gradient, solver = normal). Gridfit is an approximation method (as opposed to interpolation) closely related to thin-plate spline approximation. When Gridfit is used to create soft surfaces, some of the field data can be ignored, resulting in large deviations of the interpolated surface from the individual measurements, largest in the roughest areas. This happens because Gridfit is robust to outliers and duplicates, which means that it ignores data that strongly deviate from the mean trend of the data set. However, this can constitute a source of error if such data are not detected and deleted a priori from the data set. Otherwise, the measurement error will be calculated from an improper data set, and the interpolation error estimate will also be adversely affected. Moreover, rough bedrock areas that have not been densely sampled will be seen as outliers and will be wrongly smoothed. In contrast to Gridfit, kriging is able to produce reasonably smooth surfaces without losing reliability. In fact, it allows calibrating the maximum degree of deviation from the measurement data through a parameter called the nugget (a nugget different from zero means that kriging is no longer an exact interpolator).
Kriging also provides a closer fit to the glacier's physical conditions by allowing for anisotropy, through its ability to account for a higher autocorrelation in one direction than in another. A more detailed comparison between kriging and Gridfit, as concerns their suitability for calculating glacier volumes from GPR ice-thickness data, can be found in Martín-Español (2013, §3.6.1). It focuses on the capabilities of both methods to (1) work with sparse, error-containing and duplicate data; (2) deliver a smooth surface without losing reliability; (3) produce accurate ice-volume estimates on synthetic glaciers and ice caps; and (4) involve low interpolation errors.
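A minimal ordinary-kriging sketch, assuming an exponential variogram with illustrative sill and range values; it demonstrates the exact-interpolation property at a data point that a nonzero nugget would relax.

```python
import numpy as np

def exp_variogram(h, sill=1.0, vrange=100.0, nugget=0.0):
    """Exponential semivariogram; a nonzero nugget would make kriging
    an inexact (smoothing) interpolator."""
    gamma = nugget + (sill - nugget) * (1.0 - np.exp(-3.0 * h / vrange))
    return np.where(h > 0, gamma, 0.0)

def ordinary_krige(pts, vals, x0):
    """Ordinary-kriging prediction at x0 from scattered samples (pts, vals)."""
    n = len(pts)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = exp_variogram(d)
    A[:n, n] = 1.0                 # Lagrange multiplier column
    A[n, :n] = 1.0                 # unbiasedness constraint: weights sum to 1
    b = np.ones(n + 1)
    b[:n] = exp_variogram(np.linalg.norm(pts - x0, axis=-1))
    w = np.linalg.solve(A, b)
    return float(w[:n] @ vals)

# Hypothetical ice-thickness samples: (x, y) in m, thickness in m
pts = np.array([[0., 0.], [100., 0.], [0., 100.], [100., 100.], [50., 80.]])
vals = np.array([120., 140., 90., 110., 100.])

# With zero nugget, kriging honours the data exactly at a sample point:
print(ordinary_krige(pts, vals, pts[0]))                # 120.0 (exact)
print(ordinary_krige(pts, vals, np.array([50., 50.])))  # a blended estimate
```

Anisotropy would enter through a direction-dependent distance metric in the variogram; production codes also restrict the system to a local search neighbourhood rather than using all samples.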


Abstract
Subglacial materials play an important role in glacier dynamics. High pore-pressure, high-porosity (dilatant) tills can contribute to high basal motion rates by deforming. Amplitude Variation with Angle (AVA) analysis of seismic reflection data uses the relationship between basal reflectivity and reflection incidence angle to characterize the subglacial material. This technique can distinguish between dilatant tills and less porous, non-deforming (dewatered) tills due to their distinctive reflectivity curves. However, noise from crevasses and glacier geometry effects can complicate reflectivity calculations, which require a source amplitude derived from the bed reflection multiple. We use a forward model to produce synthetic seismic records, including datasets with and without visible bed reflection multiples. The synthetic data are used to test source amplitude inversion and crossing angle analysis, which are amplitude analysis techniques that do not require absolute reflectivity calculations. We find that these alternative methods can distinguish subglacial till types, as long as reflections from crevasses do not obscure the bed reflection. The forward model can be used as a planning tool for seismic surveys on glaciers, as it can predict AVA success or failure based on crevasse geometries from remote sensing data and glacier bed geometry from radar or from a worst-case-scenario assumption of glacier bed shape.


Revised: 6 November 2018 – Accepted: 7 November 2018 – Published: 22 November 2018
Abstract. There is significant uncertainty regarding the spatiotemporal distribution of seasonal snow on glaciers, despite its being a fundamental component of glacier mass balance. To address this knowledge gap, we collected repeat, spatially extensive high-frequency ground-penetrating radar (GPR) observations on two glaciers in Alaska during the spring of 5 consecutive years. GPR measurements showed steep snow water equivalent (SWE) elevation gradients at both sites; continental Gulkana Glacier's SWE gradient averaged 115 mm per 100 m and maritime Wolverine Glacier's gradient averaged 440 mm per 100 m (over > 1000 m). We extrapolated GPR point observations across the glacier surface using terrain parameters derived from digital elevation models as predictor variables in two statistical models (stepwise multivariable linear regression and regression trees). Elevation and proxies for wind redistribution had the greatest explanatory power, and exhibited relatively time-constant coefficients over the study period. Both statistical models yielded comparable estimates of glacier-wide average SWE (1 % average difference at Gulkana, 4 % average difference at Wolverine), although the spatial distributions produced by the models diverged in unsampled regions of the glacier, particularly at Wolverine. In total, six different methods for estimating the glacier-wide winter balance average agreed within ±11 %. We assessed interannual variability in the spatial pattern of snow accumulation predicted by the statistical models using two quantitative metrics. Both glaciers exhibited a high degree of temporal stability, with ∼ 85 % of the glacier area experiencing less than 25 % normalized abso-
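The extrapolation step can be sketched as a simple SWE-on-elevation regression. The sample points below are synthetic, generated with a gradient similar to the ~115 mm per 100 m reported for Gulkana; the full study also used wind-redistribution proxies and regression trees.

```python
import numpy as np

# Synthetic GPR-derived SWE samples: elevation in m, SWE in mm, generated
# with a gradient similar to the ~115 mm per 100 m reported for Gulkana
rng = np.random.default_rng(3)
elev = rng.uniform(1200.0, 2200.0, 200)
swe = 1.15 * (elev - 1200.0) + 300.0 + rng.normal(0.0, 50.0, elev.size)

# Least-squares fit SWE = a * elev + b, the core of the regression step
a, b = np.polyfit(elev, swe, 1)
gradient_per_100m = a * 100.0
print(round(gradient_per_100m))  # ~115 (mm per 100 m)

# Extrapolate to an unsampled elevation on the same glacier
swe_at_2000m = a * 2000.0 + b
```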


4.3 RFI Suppression

4.3.1 Introduction
Radio frequency interferences are always a possible problem for radar systems operating in the VHF/UHF bands because these frequencies are also widely used by television, radio and wireless communications for many civilian and military applications. Signals from these emitter sources are referred to as RFI because they contaminate the spectrum of the desired signals. In order to understand the RFI environment encountered by VHF/UHF radar systems, surveys were performed by a Grumman E-2C throughout the world to characterize RF interferers in terms of the density of the interference emitters, the type of emitters, their effective radiated power, their modulation bandwidth, duty factor and their temporal dependence. It was found that even in remote locations the average interference power often exceeds receiver noise by many dB [43]. RFI may also come from other electronic systems that work together with the radar system, like the laser altimeter operating alongside the CReSIS radar systems. Furthermore, any hardware design flaws or improper operation of the radar will introduce RFI internally.
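A common first-pass remedy for narrowband RFI is a frequency-domain notch. The sketch below is a generic illustration on synthetic data, not the suppression scheme used by the CReSIS systems.

```python
import numpy as np

def notch_rfi(signal, threshold_factor=5.0):
    """Null FFT bins whose magnitude exceeds threshold_factor times the
    median spectral magnitude -- a simple notch for narrowband RFI.

    This is a generic illustration; operational systems use more careful
    estimate-and-subtract schemes to avoid distorting the radar signal.
    """
    spec = np.fft.fft(signal)
    mag = np.abs(spec)
    spec[mag > threshold_factor * np.median(mag)] = 0.0
    return np.fft.ifft(spec).real

# Broadband noise plus a strong narrowband (CW) interferer -- synthetic
rng = np.random.default_rng(4)
n = 1024
t = np.arange(n)
noise = rng.normal(0.0, 1.0, n)
rfi = 20.0 * np.sin(2 * np.pi * 0.123 * t)
contaminated = noise + rfi
cleaned = notch_rfi(contaminated)
print(float(np.std(contaminated)), float(np.std(cleaned)))
```

The median-based threshold works because narrowband interferers concentrate their power in a few bins, leaving the median of the spectrum near the noise floor.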


The interactions between the target and the energy are typically distinguished by the dominant scattering mechanism involved (Tomiyasu, 1978; Cloude, 1985). There are three main mechanisms: single-bounce (surface), double-bounce, and volumetric scattering. Specular reflection occurs when the radar energy is incident on a surface that is smooth relative to the wavelength; the reflected angle approximately equals the incident angle, and the returned energy is near zero. Rough surfaces (relative to the wavelength) scatter in all directions. Some of this energy is returned to the sensor and is known as single-bounce scattering. In general, rougher surfaces have higher backscatter, again depending on the SAR wavelength. Double bounces occur when two smooth surfaces, one flat on the ground and the other vertical, combine to reflect at a very high intensity.


Before the introduction of game-theoretic methods in radar systems, game theory had already been studied and had provided some impressive results in wireless communication networks. Many applications of game theory in wireless networks are used to solve problems that can also arise in radar networks, such as resource allocation and beamforming. One popular technique to tackle the aforementioned problems is non-cooperative game theory. Non-cooperative game theory is considered to be a dominant branch of game theory regarding wireless and radar networks, since it studies and models competitive decision making among several egoistic but intelligent players, with no communication or coordination of strategic choices. Hence, these properties highly resemble the interactions between widely separated stations or radars in a fully autonomous, distributed communication or radar network, respectively. In a non-cooperative game, each player attempts to optimize its utility function selfishly by selecting the most appropriate strategy, without any communication among the players, and this move has an impact on the utility functions of the other players. It is important to highlight at this point that partial cooperation is feasible in non-cooperative games, bearing in mind that any cooperation assumed in the system is self-imposed by each player without any communication or coordination among the players.


6. J. Lijffijt, P. Papapetrou, and K. Puolamäki. The k-windows problem:
Finding the most informative set of window lengths. Forthcoming.
Publication 6 is an expanded version of Publication 5. The text has been partly rewritten to adhere to the style of the thesis. The initial hypothesis was formulated together with the co-authors Panagiotis Papapetrou and Kai Puolamäki. The current author has written most of the text of the articles, has developed and implemented the methods, has designed and conducted all the experiments presented in this thesis, and has analysed and interpreted the results of the experiments. The proof for an analytical solution in a special case was first derived by Kai Puolamäki, and both Panagiotis Papapetrou and Kai Puolamäki have participated in writing, discussions and provided feedback during all stages of the research.


Environmental Concerns
Crewmembers of a distant exploration mission will face a level and duration of radiation exposure never before encountered in the history of space flight. Multiple authors identify radiation as a principal risk to crew health on an exploration mission.4,18,19 During the 6-month journey between planets, nothing can protect the crew but their own spacecraft.18 There remains no reliable way to forecast large radiation events; current technology enables less than 30 minutes of warning before a solar particle event.18 Using a mission to Mars as an example, and assuming a 180-day surface stay and 65 EVAs (among the lower estimates encountered in the literature), the crew would face a 0.2% to 0.3% risk of a very large solar particle event delivering up to 1 Gray of radiation to blood-forming organs.19 Though these percentages are small, they may constitute a risk of radiation-induced illness. Furthermore, the late effects of radiation exposure, likely to result from galactic cosmic radiation, occur months to years after exposure and manifest as an increased incidence of certain tumors, skin cancer, hematologic cancers, cataracts, tissue damage or mutations, and other symptoms.19 Given a multi-year expedition, it is possible that astronauts may begin to show these late effects during the mission, with a potential operational impact.


Fig. 9 ParaView (3-D) SAR free-space image of 0.25-m radius PEC sphere
3.1 Seven-Strand Twisted Power Line
Figures 10 and 11 capture the distinct characteristics of the twisted power-line RCS as computed by both the exact method-of-moments algorithm of FEKO and the more approximate Xpatch, respectively. The results guided azimuthal sampling for the SAR imaging. Figures 12 through 28 show SAR images of the 2-m section of twisted-strand power-line cable in either free space or suspended 3 m above ground. Free-space images are stated as such in the captions. The figures alternate between Xpatch's Catalus SAR images and ParaView images and show returns for either vertical-vertical (VV) or horizontal-horizontal (HH) polarization. The Catalus images are 2-D SAR images and represent a horizontal slice of the 3-D data with the radar beam incident from the left in the figures. A magenta overlay of the cable facet model is present in the Catalus images when available. In the ParaView images, an overlay of the cable facet model is shown as a thin whitish line usually running through the hotspot of the radar image. As the radar incidence upon the cable turns in azimuth from nearly normal incidence, 89.6°, to 74.6°, the tiny facet line rotates by that 15° difference in the corresponding Catalus SAR images.


2.3. Time-Frequency Transforms
In general, time-frequency analysis attempts to represent and manipulate a signal in time and frequency simultaneously. This is useful when observing signals whose frequency content changes over time, like music, speech, and the micro-Doppler signature from animal movements. Time-frequency analysis can therefore be thought of as a generalization of Fourier analysis, which is most useful when the signal in question is stationary [4, p. 25]. There is a body of different techniques and methods to decompose a signal in both time and frequency. In this thesis two different time-frequency transforms are explored. The first is the well-known short-time Fourier transform (STFT) and the second is a parameter estimation performed with an algorithm known as RELAX [6]. Both methods work by having a sliding window extract a portion (in time) of the signal(s) and calculate the spectral contents of that portion. In this system two different window lengths are used, one for creating the range-Doppler map (more on that later) and one to track the desired target. First in this section the frame buffer that decides the window lengths is presented. Second, the information contained in the windows is discussed in the subsection The Doppler Spectrum & range-Doppler map. After that, the next subsection details the processing done in the Clutter removal step and the Calculate spectrogram step.
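The sliding-window idea behind the STFT can be sketched directly; the window length, hop size, and chirp test signal below are illustrative, not the parameters used in the system described here.

```python
import numpy as np

def stft_mag(x, win_len=128, hop=64):
    """Magnitude STFT via a sliding Hann window: each frame is windowed
    and Fourier-transformed, giving a (frames x frequency-bins) array."""
    win = np.hanning(win_len)
    n_frames = 1 + (len(x) - win_len) // hop
    frames = np.stack([x[i * hop : i * hop + win_len] * win
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1))

# A chirp whose frequency rises from 50 Hz towards 350 Hz (test signal)
fs = 1000.0
t = np.arange(0.0, 1.0, 1.0 / fs)
x = np.sin(2 * np.pi * (50.0 + 150.0 * t) * t)

S = stft_mag(x)
early_peak = int(S[0].argmax())   # dominant bin in the first frame
late_peak = int(S[-1].argmax())   # dominant bin in the last frame
assert late_peak > early_peak     # the spectral peak moves up over time
```

The window length sets the trade-off: long windows give fine frequency resolution but smear fast changes in time, which is why the system uses two different lengths for the range-Doppler map and for target tracking.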
