an infrared scintillometer, enabling estimation of large-area evapotranspiration across northern Swindon, a suburban area in the UK. Both sensible and latent heat fluxes can be obtained using this "two-wavelength" technique, as it is able to provide both temperature and humidity structure parameters, offering a major advantage over conventional single-wavelength scintillometry. The first paper of this two-part series presented the measurement theory and structure parameters. In this second paper, heat fluxes are obtained and analysed. These fluxes, estimated using two-wavelength scintillometry over an urban area, are the first of their kind. Source area modelling suggests the scintillometric fluxes are representative of 5–10 km². For comparison, local-scale (0.05–0.5 km²) fluxes were measured by an eddy covariance station. Similar responses to seasonal changes are evident at the different scales but the energy partitioning varies between source areas. The response to moisture availability is explored using data from 2 consecutive years with contrasting rainfall patterns (2011–2012). This extensive data set offers insight into urban surface–atmosphere interactions and demonstrates the potential for two-wavelength scintillometry to deliver fluxes over mixed land cover, typically representative of an area 1–2 orders of magnitude greater than for eddy covariance measurements. Fluxes at this scale are extremely valuable for hydro-meteorological model evaluation and assessment of satellite data products.
Hill et al. (1988) outlined the two-wavelength theory and demonstrated the viability of the technique by comparison with structure parameters and fluxes from other micrometeorological methods at a very homogeneous agricultural site. The technique has since been used to estimate heat fluxes over a vineyard (Green et al., 2000), pasture (Green et al., 2001), mixed agricultural landscapes (Meijninger et al., 2002a) and mixed agriculture with complex topography (Evans et al., 2010). The heavily instrumented LITFASS campaigns (Beyrich and Mengelkamp, 2006) focus on heterogeneous land cover, mainly mixed agriculture. During LITFASS-2003, heat fluxes from aggregated eddy covariance (EC) data compared with two-wavelength scintillometry estimates indicated generally good performance over the 11-day measurement period (Meijninger et al., 2006). In addition, the bichromatic-correlation method was developed and tested (Beyrich et al., 2005; Lüdi et al., 2005). This extension to the two-wavelength method enables a path-averaged measurement of the temperature–humidity correlation coefficient, thus removing the need to assume its value as with the traditional two-wavelength method. In the overview paper of LITFASS-2009 (Beyrich et al., 2012) structure parameters from two example days, 12–13 July 2009, are presented. Two-wavelength scintillometry remains a specialist technique at present, as only a few instruments of longer wavelength exist, although a commercially available system has recently been developed (Hartogensis et al., 2013). A three-wavelength method has also been proposed as a solution to obtaining the combined temperature–humidity fluctuations, but would require at least three scintillometers, and sensitivity analyses suggest results may not be significantly improved over the two-wavelength method (Andreas, 1990).
Single-wavelength (large aperture) scintillometry is now a fairly well-established technique deployed in a variety of locations, although its use in urban environments has only developed recently. A 3-week campaign in Marseille demonstrated good agreement between scintillometer and EC sensible heat fluxes over the "reasonably homogeneous" built surface (Lagouarde et al., 2006). A much longer campaign in Łódź looks at diurnal and seasonal variability (Zieliński et al., 2012). Results from Swindon using the
The performance of EC and scintillometry differs with atmospheric conditions. EC data from open-path gas analysers cannot be used if the instrument windows are wet, such as during and after rainfall (Heusinkveld et al., 2008). Consequently, water vapour measurements from open-path systems significantly under-represent these times and may result in an appreciable underestimation of mean Q_E (Ramamurthy and Bou-Zeid, 2014) and C_q^2. Mean values for C_T^2 calculated using daytime data only when all quantities are available concurrently (i.e. C_T^2 and C_q^2 from EC, two-wavelength and bichromatic-correlation methods – grey bars, Fig. 9) are larger than mean values calculated using all available daytime data (coloured bars), owing to the exclusion of periods during and directly following rain (when C_T^2 is typically relatively low and Q_E relatively high). In September 2012 the opposite effect is seen because the EC data were limited by dirty IRGA windows when the weather was dry and sunny; hence it is generally the high C_T^2 values that are eliminated in this case. During summer, monthly mean C_q^2_BLS–MWS is reduced slightly for the concurrent data set (grey bars), as times of high Q_E, when water and energy are plentiful, have been excluded. The overall results are not substantially changed for the restricted subset: C_T^2_BLS–MWS and C_T^2_EC are still similar, whilst C_q^2_BLS–MWS still exceeds C_q^2_EC, although the dif-
ing the day when both turbulent fluxes are reasonably expected to be positive and EC data do not suggest otherwise). The reason for this is unknown, but two possible explanations are considered here. Firstly, Taylor's frozen-turbulence hypothesis is assumed in order to relate intensity fluctuations to C_n^2 (Clifford, 1971; Wang et al., 1981). When crosswind speeds are low, this assumption is less justified and eddies decay as they are slowly blown through the beam. Correlation between the received scintillation pattern from one sample to the next is reduced compared to higher-crosswind cases (Poggio et al., 2000). Correlation between the scintillation signals of the BLS and MWS will likely show greater variation too, depending on how the decay of eddies affects each beam, which would result in variability in C_n1n2 that prop-
The principle of using sequentially addressed arrays of parallel microelectrodes to electrostatically propel microparticles, originally described by Masuda et al. [1, 2], has now been developed into a technique for characterizing and separating cells and micro-organisms [3–6]. Travelling electric fields can also be used as electrohydrodynamic pumps capable of moving liquids. In initial experiments, low-frequency travelling electric fields were observed to induce motion in blood cells via largely electrophoretic interactions between the cells and the electrodes. Later work, using fields of much higher frequency, showed that an asynchronous motive force could be induced in latex beads. This force arises from a displacement between the induced dipole moment of the particle and the applied travelling electric field, in a linear equivalent to classical electrorotation. In a study by Huang et al. in 1993, this linear force was termed travelling-wave dielectrophoresis (TWD).
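The induced-dipole physics behind TWD can be illustrated with the complex Clausius–Mossotti factor K(ω) of a sphere in a lossy medium; the TWD force scales with the imaginary part of K, the conventional dielectrophoretic force with its real part. A minimal sketch — all particle and medium properties below are illustrative assumptions, not values from the works cited:

```python
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def cm_factor(omega, eps_p, sig_p, eps_m, sig_m):
    """Complex Clausius-Mossotti factor for a homogeneous sphere in a
    medium, using complex permittivities eps* = eps*EPS0 - j*sigma/omega."""
    ep = eps_p * EPS0 - 1j * sig_p / omega
    em = eps_m * EPS0 - 1j * sig_m / omega
    return (ep - em) / (ep + 2 * em)

# Assumed values: a latex bead (eps_r = 2.55, effective conductivity
# 10 mS/m) suspended in a dilute aqueous medium (eps_r = 78, 1 mS/m).
f = np.logspace(3, 8, 6)  # 1 kHz .. 100 MHz
K = cm_factor(2 * np.pi * f, 2.55, 1e-2, 78.0, 1e-3)
for fi, Ki in zip(f, K):
    print(f"{fi:10.0f} Hz  Re(K)={Ki.real:+.3f}  Im(K)={Ki.imag:+.3f}")
```

At low frequency the conductivities dominate (Re(K) ≈ 0.75 here), at high frequency the permittivities do (Re(K) < 0), and Im(K) peaks in between — the frequency window where the travelling-wave force is strongest.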
All the analyses discussed above confirm the growth in suburban development, especially in the direct vicinity of the city, and the resulting progressive homogenisation of the landscape. Similar tendencies have been observed by other researchers, e.g. in studies on changes in landscape character with distance from main roads and the city centre of Warsaw (Roo-Zielińska et al., 2013), or in research on the suburbanisation of the outskirts of Lublin (Przesmycka, 2012) and Poznań (Szczepańska and Wilkaniec, 2014). The latter two works did not employ landscape indicators to describe the phenomenon of the spreading of large cities; however, the authors used direct observations to conclude that the areas designated for residential development are increasing and that the spatial layout of suburban villages is undergoing progressive transformation. Foryś and Putek-Szeląg (2014) demonstrated that the analysed area belongs to those regions of Poland where urban agglomerations play a major role in the transformation of the real estate market. This confirms that it is justified to analyse and monitor the transformations of the functional and spatial structure of the suburban zone of Wrocław.
Abstract: Thermal energy can be collected whenever it is available and used whenever needed through an effective application of heat energy storage. Heat energy storage can be combined with any heating system, such as electric heating, waste heat or a solar system. Because solar energy is an intermittent source, heat energy storage systems are widely recommended and are influencing the sector. The experiment was performed at Kshitija Stone Crushers Pvt. Ltd., Wadgaon, Tal. Murtizapur, Dist. Akola (M.S.), India, on thermal energy storage using a concrete block as the solid-media sensible heat storage material, chosen because it is locally available at low cost and has a high heat storage capacity. Cooling water at high temperature, flowing to the radiator from the diesel engine of a power generation unit, is used as the heat transfer fluid (HTF) in order to reuse waste heat energy. The concrete storage prototype is composed of a concrete heat storage block with an embedded pipe. The embedded pipe transports and distributes the heat transfer medium while sustaining the pressure. The experiment was carried out in two stages, charging and discharging: energy is stored during charging and retrieved during discharging. The concrete block stores the thermal energy as sensible heat. The thermal performance of the thermal energy storage (TES), including charging and discharging time, radial thermal distribution and energy efficiency, has been evaluated. The charging and discharging experiments showed that the rate of energy storage and retrieval depends on the temperature and mass flow rate of the HTF. The results show that increasing the HTF flow rate increases the overall heat transfer coefficient, thereby enabling faster exchange of heat and reducing charging and discharging times. It can be concluded from this experiment that thermal energy storage can also play a useful role in waste heat management systems.
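As a back-of-envelope check on the sensible-heat principle in the abstract, the stored energy follows Q = m·c·ΔT. All numbers below (block size, temperature swing, material properties) are assumed for illustration, not taken from the experiment:

```python
# Sensible-heat capacity of a concrete storage block: Q = m * c * dT.
# All values are typical/assumed, not the paper's measured data.
density = 2300.0   # kg/m^3, typical dense concrete
volume = 0.5       # m^3, assumed block size
c = 880.0          # J/(kg K), typical specific heat of concrete
dT = 40.0          # K, assumed charge from 30 degC to 70 degC

mass = density * volume  # kg
Q = mass * c * dT        # J, stored sensible heat
print(f"mass = {mass:.0f} kg, stored heat = {Q/1e6:.2f} MJ "
      f"(= {Q/3.6e6:.2f} kWh)")
```

Even a modest half-cubic-metre block stores on the order of tens of megajoules over a 40 K swing, which is why concrete is attractive as a low-cost storage medium.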
In practice, you need data as part of the running of your institution and as part of the management of those data (data to manage data). Some databases are therefore operational: they store data that are collected, maintained and modified (dynamic data), such as sales transactions, circulation services, cataloguing, etc. Other types of databases are analytical. These track historic (static) data that are often used in transaction analyses and in creating other statistical reports. For example, to review circulation trends or web use, researchers and practitioners use access logs from web servers to extract data about queries, which databases and other servers were used, etc.
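The access-log analysis mentioned above can be sketched in a few lines; the log format and the q= query parameter here are invented for illustration, not taken from any real server configuration:

```python
import re
from collections import Counter

# Toy web-server access-log lines; format and query parameter are assumed.
LOG_LINES = [
    '10.0.0.1 - - [01/Mar/2021:10:00:01] "GET /search?q=circulation+trends HTTP/1.1" 200',
    '10.0.0.2 - - [01/Mar/2021:10:00:07] "GET /search?q=web+use HTTP/1.1" 200',
    '10.0.0.1 - - [01/Mar/2021:10:01:12] "GET /search?q=circulation+trends HTTP/1.1" 200',
]

QUERY_RE = re.compile(r'GET /search\?q=([^ "&]+)')

# Count how often each search query appears -- the kind of analytical
# report (e.g. circulation trends, web use) described in the text.
queries = Counter(
    m.group(1).replace('+', ' ')
    for line in LOG_LINES
    if (m := QUERY_RE.search(line))
)
print(queries.most_common())
```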
one as the incident EM wave can penetrate into it and excite interior resonances. The interior multipath reflections and the surface creeping wave might add constructively to the specular return from the sphere's front surface. Consider a dielectric sphere of 1 m diameter and permittivity 5, shown in Fig. 6.9. The interrogating Gaussian pulse plane-wave signal is shown in Fig. 6.10 and its impulse response in Fig. 6.11. The determination of the start of the late time should not be the same as for a conducting sphere of the same size. T_L is estimated by calculating the time at which the forced response vanishes. When the interrogating pulse penetrates the dielectric sphere, it propagates inside it; on hitting the far side of the interior wall it is partly reflected back, then penetrates the dielectric surface again into free space and contributes to the backscattering. There could be multiple backward reflections; here we consider only the first, as it is the dominant one. As this is part of the forced response, we must exclude it from the late-time response. The late-time response begins just after this forced response vanishes. The transit time T_tr in Eq. (6.2) should carry a factor √ε_r, as the incident EM wave travels √ε_r times slower in the dielectric sphere than it would in free space. T_L is estimated to be 61.147 ns in this particular case, in which the pulse width T_p is 4.5 ns. The late-time response is shown in Fig. 6.12.
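The √ε_r scaling of the transit time can be checked with a few lines. This sketch computes only the interior propagation delay of the first back-reflection (a path of 2D through the sphere at speed c/√ε_r); it does not reproduce the absolute T_L of 61.147 ns, which also depends on the simulation's time origin and observation geometry:

```python
import math

c = 299_792_458.0   # speed of light in vacuum, m/s
D = 1.0             # sphere diameter, m (from the example above)
eps_r = 5.0         # relative permittivity (from the example above)
T_p = 4.5e-9        # interrogating pulse width, s

# One-way crossing of the sphere interior, slowed by sqrt(eps_r):
t_cross = D * math.sqrt(eps_r) / c
# First interior back-reflection: across the sphere and back (2D path):
t_round = 2 * t_cross
print(f"one-way interior transit : {t_cross*1e9:.2f} ns")
print(f"first interior reflection: {t_round*1e9:.2f} ns after entry")
print(f"pulse width              : {T_p*1e9:.2f} ns")
```

The ~15 ns interior round-trip delay is over three times the 4.5 ns pulse width, which is why the first interior reflection must be treated as part of the forced response rather than late time.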
desired output frequency but will be cut off at the second harmonic. Both of the above circuits will normally have a backshort behind the diode. In the first case, the backshort will have an effect on the impedance at the output frequency only. In the second case, the backshort will affect both the impedance at the second harmonic and the impedance at the output frequency, and as a result it may not be possible to optimise both circuits simultaneously. Archer solved this problem by tuning the idler and output circuits with quasi-optical components. The output waveguide is made large enough to pass the second harmonic, via a feedhorn and collimating lens, onto a dichroic plate. The dichroic plate acts as a high-pass filter, and comprises a flat aluminium sheet drilled through with an array of circular holes. The circular holes are regarded as short lengths of circular waveguide whose diameter is small enough to be cut off at the second harmonic. The transmission curve of the dichroic plate is a function of the thickness of the plate, the diameter of the holes, and the spacing between the holes. As the plate looks like a plane mirror at the second harmonic, moving the plate axially with respect to the plane of the diode tunes the idler frequency termination. With the multiplier working as a tripler, the third harmonic is tuned with a pair of fused quartz plates. The reactive part of the third-harmonic impedance is tuned by moving these plates relative to the diode, and the transmission peak depends on their spacing, similar to a Fabry–Perot resonator. A second dichroic plate, which reflects the third harmonic but passes the fourth, was placed behind the first dichroic plate so that the system worked as a quadrupler. Quasi-optical circuit terminations of this type have virtually no resistive loss and are relatively broadband, and thus have considerable advantages over waveguide or stripline circuits at sub-millimetre wavelengths.
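The hole-diameter design rule can be made concrete with the dominant-mode cutoff formula for circular waveguide, f_c = 1.8412·c/(π·d) (TE11 mode). The fundamental frequency and hole diameter below are assumed for illustration, not Archer's actual values:

```python
import math

c = 299_792_458.0  # m/s

def te11_cutoff(d):
    """Cutoff frequency (Hz) of the dominant TE11 mode of a circular
    waveguide of diameter d (m); 1.8412 is the first root of J1'."""
    return 1.8412 * c / (math.pi * d)

# Assumed design point: fundamental at 100 GHz, so the dichroic plate
# must reflect 2f0 = 200 GHz (below cutoff) and pass 3f0 = 300 GHz.
f0 = 100e9
d = 0.70e-3  # hole diameter, m (chosen to put cutoff between 2f0 and 3f0)
fc = te11_cutoff(d)
print(f"TE11 cutoff for d = {d*1e3:.2f} mm holes: {fc/1e9:.0f} GHz")
print(f"reflects 2f0 = {2*f0/1e9:.0f} GHz: {2*f0 < fc}")
print(f"passes   3f0 = {3*f0/1e9:.0f} GHz: {3*f0 > fc}")
```

Plate thickness and hole spacing then shape how sharp the transition is, as the text notes, but the cutoff diameter sets where it sits.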
In contrast to satellite retrievals, ERA presumes a static sea surface, which may lead to a systematic misrepresentation of u in regions of strong, confined surface ocean currents that propagate along the prevailing wind direction. The latter was investigated by Meissner et al. (2001) as part of a 10-year intercomparison study of SSM/I and reanalysis wind speed retrievals from 1987–1997. Results indicate that both reanalysis products significantly overestimate u with respect to SSM/I output (by up to 3 m/s) along the upwelling regions off Mauritania and Namibia (compare Plate 1 and Plate 2 therein), where both the oceanic currents (Canary Current to the north, Benguela Current to the south) and the prevailing winds propagate equatorwards along the coastline (depending on season). This mechanism may account for the negative biases within the eastern subtropical Atlantic illustrated in Fig. 5.2. Fig. 5.3 illustrates, as an example, the current–wind relationship based on buoy and scatterometer measurements, which may be replaced by Oceanet (or ERA) and HOAPS, respectively.
3.5. Driving Forces Behind Fluxes
[41] Figure 10 shows plots of EC and scintillometer LE and H values against the u(e_s − e_a) and u(T_s − T_a) functions, which are known to have a significant impact on turbulent exchange. Note that e_s was derived using net radiometer measurements of T_s, which, along with measurements of u, T_a, and e_a, were taken from the floating weather station. Both EC and scintillometer data showed strong relationships with the u(e_s − e_a) and u(T_s − T_a) functions. Figure 10a shows that there were occasional outliers where positive EC H values were measured when u(T_s − T_a) was negative. These outliers tended to occur shortly after a transition from T_s > T_a to T_s < T_a, suggesting that they may have been the result of EC footprint contamination occurring when the internal boundary layer height was below the height of the instrumentation. Normally, 1–2 h after the transition from unstable to stable conditions the relationship between H and u(T_s − T_a) returned to normal.
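The u(T_s − T_a) and u(e_s − e_a) dependence reflects the standard bulk aerodynamic flux formulations. A minimal sketch with assumed transfer coefficients and input values (not the paper's data or its exact formulation):

```python
# Bulk aerodynamic estimates of sensible (H) and latent (LE) heat flux.
# Constants and transfer coefficients below are assumed typical values.
rho = 1.2           # air density, kg/m^3
cp = 1005.0         # specific heat of air, J/(kg K)
Lv = 2.45e6         # latent heat of vaporisation, J/kg
P = 101.3           # air pressure, kPa
C_H = C_E = 1.3e-3  # bulk transfer coefficients (assumed)

def fluxes(u, T_s, T_a, e_s, e_a):
    """H and LE in W/m^2 from wind speed u (m/s), surface/air
    temperatures (degC) and surface/air vapour pressures (kPa)."""
    H = rho * cp * C_H * u * (T_s - T_a)
    # specific-humidity difference approximated as 0.622*(e_s - e_a)/P
    LE = rho * Lv * C_E * u * 0.622 * (e_s - e_a) / P
    return H, LE

H, LE = fluxes(u=4.0, T_s=22.0, T_a=19.0, e_s=2.64, e_a=1.40)
print(f"H = {H:.1f} W/m^2, LE = {LE:.1f} W/m^2")
```

Both fluxes are linear in u times the relevant surface–air difference, which is why plotting measured H and LE against these products is a natural diagnostic.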
To determine the possible future expected changes in Ta, we calculated the annual trend from a 20-year monthly mean time series (1998–2018) and for the near (2019–2039) and far future (2079–2099). To calculate trends, we applied the non-parametric Mann–Kendall test (Mann, 1945; Kendall, 1975), and to quantify the rates of change, Sen's slope estimator (Sen, 1968), using the "trend" package for R (R package version 1.1.1; Pohlert, 2018). We obtained the 1998–2018 time series from two weather stations, one located at the Bahía Blanca station (38.44°S, 62.1°W), to describe the SG (National Weather Service, SMN, https://www.smn.gob.ar), and one at Hilario Ascasubi (39.38°S, 62.62°W), adjacent to La Salada (National Institute of Agricultural Technology, INTA, https://www.argentina.gob.ar/inta). We obtained the short- and long-term future series from a climate model database of the Centro de Investigaciones del Mar y la Atmósfera (CIMA) (http://3cn.cima.fcen.uba.ar). We used the annual mean temperature of the Community Climate System Model 4 (CCSM4) of the National Center for Atmospheric Research (NCAR, USA), as it performed best among 24 climate models from CMIP5 (Coupled Model Intercomparison Project 5) and CLARIS-LPB (A Europe–South America Network for Climate Change Issues and Impacts Assessment – La Plata Basin Project), assessed for the present time in the region after a quantile-mapping bias correction using observations from the Climatic Research Unit (CRU). We used two Representative Concentration Pathways (RCPs) for the emission of greenhouse gases (GHG): RCP 4.5, which represents a mitigation scenario stabilising radiative forcing at 4.5 W m−2, and RCP 8.5, which represents a rising scenario
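For readers without the R "trend" package, the Mann–Kendall S statistic and Sen's slope are simple enough to sketch directly. The series below is synthetic (an assumed trend plus noise), not the Bahía Blanca or Hilario Ascasubi data:

```python
import numpy as np

def mann_kendall_s(x):
    """Mann-Kendall S statistic: concordant minus discordant pairs."""
    x = np.asarray(x, dtype=float)
    s = 0.0
    for i in range(len(x) - 1):
        s += np.sign(x[i + 1:] - x[i]).sum()
    return s

def sens_slope(x):
    """Sen's slope: median of all pairwise slopes (x[j]-x[i])/(j-i)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    slopes = [(x[j] - x[i]) / (j - i)
              for i in range(n - 1) for j in range(i + 1, n)]
    return np.median(slopes)

# Synthetic 21-point annual Ta series (1998-2018) with an imposed
# 0.03 K/yr warming trend plus noise -- illustrative only.
rng = np.random.default_rng(0)
years = np.arange(1998, 2019)
ta = 15.0 + 0.03 * (years - 1998) + rng.normal(0, 0.1, years.size)
print(f"S = {mann_kendall_s(ta):.0f}, Sen slope = {sens_slope(ta):.3f} K/yr")
```

A significantly positive S indicates a monotonic upward trend; Sen's slope gives its rate robustly, without assuming normally distributed residuals (the point of using these non-parametric tools).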
the now widely adopted IEEE 802.11 wireless Local Area Network (LAN) standard presently supports 2 Mb/s bandwidths at 2.4 GHz. Furthermore, fixed-link broadband access in the Multichannel Multipoint Distribution System (MMDS) band at 2.5 GHz can support data rates of tens of megabits per second. A common feature of these standards is that they use carriers of a few gigahertz and that only text, audio, and possibly compressed video services can be supported. If services with higher data rates are to be provided, such as those listed in Table 1.1, microwaves simply do not have sufficient bandwidth for these applications to be supported. In order to support these services, millimetre-wave radio has to be used. Furthermore, even though microwaves augmented with multilevel modulation formats currently give adequate bandwidth for most common applications, there is little margin to accommodate higher-data-rate systems. An investment in millimetre-wave radio will therefore result in a more future-proof system. An additional powerful argument for millimetre-wave radio is cost. In 2000, the third-generation UMTS spectrum was auctioned in Britain: 22 billion pounds was paid for a total of 155 MHz of bandwidth. By contrast, the auction of spectrum for millimetre-wave broadband wireless in the 28 GHz band grossed 38.2 million pounds for a total of 672 MHz of bandwidth. Bandwidth is simply much less expensive at millimetre-wave frequencies.
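The cost argument above can be made explicit per unit of bandwidth, using the two auction figures quoted:

```python
# Cost-per-bandwidth comparison of the two spectrum auctions quoted above.
umts_cost, umts_bw = 22e9, 155.0  # pounds, MHz (UK UMTS auction, 2000)
mmw_cost, mmw_bw = 38.2e6, 672.0  # pounds, MHz (28 GHz band auction)

umts_per_mhz = umts_cost / umts_bw  # pounds per MHz
mmw_per_mhz = mmw_cost / mmw_bw     # pounds per MHz
ratio = umts_per_mhz / mmw_per_mhz
print(f"UMTS:   {umts_per_mhz/1e6:6.1f} M GBP per MHz")
print(f"28 GHz: {mmw_per_mhz/1e3:6.1f} k GBP per MHz")
print(f"ratio:  ~{ratio:.0f}x cheaper at millimetre-wave")
```

Per megahertz, the millimetre-wave spectrum went for roughly three orders of magnitude less than the UMTS spectrum.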
The basic premise behind fire algorithms used for remote sensing is the definition of the background temperature of a target pixel. An accurate measure of this temperature is vital in order to classify a target pixel as containing a fire in the first place, and to accurately estimate the area of the pixel containing fire and the intensity or radiative output of the fire. Background temperature tends to be difficult to determine accurately because of the obscuring effects of the fire's output, which outweighs the background signal from a pixel in the medium-wave infrared. Early efforts to correct for this behaviour used a bi-spectral approach, which used the response of thermal infrared bands over the same area to develop an estimate of fire characteristics. Thermal infrared bands also display sensitivity to fire outputs, but to a much lesser extent, and are generally used for false-alarm detection, especially for marginal detections from the medium-wave infrared caused by solar reflection. The difference between the signal responses in these two bands is the basis for most current geostationary fire detection algorithms, and similarly for the analysis of LEO sensor data. Issues with these algorithms arise when looking at fires of smaller extent. A study by Giglio and Kendall highlighted issues with fire retrievals using the bi-spectral method, especially with regard to smaller fires and background temperature characterisation. The study found that misattribution of the background temperature by as little as 1 K for fires that covered a small portion of a pixel (p ≤ 0.0001) could produce errors in fire area attribution by a factor of 100 or more, along with errors in temperature retrieval of more than ±200 K. This is of major concern for the use of geostationary sensors for detection, as fires in their early stages make up a far smaller portion of a pixel from a geostationary sensor than is the case with a LEO sensor.
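The sensitivity to background temperature can be illustrated with a deliberately simplified single-band retrieval (fire temperature assumed known, which the real bi-spectral method does not assume): mix Planck radiances for a sub-pixel fire, then invert for the fire fraction with a background temperature that is wrong by ±1 K. The band, temperatures and fire fraction below are illustrative assumptions:

```python
import math

H, C, K = 6.62607015e-34, 2.99792458e8, 1.380649e-23  # SI constants

def planck(lam, T):
    """Spectral radiance B(lam, T), W m^-3 sr^-1 (per metre wavelength)."""
    return 2 * H * C**2 / lam**5 / (math.exp(H * C / (lam * K * T)) - 1)

lam = 3.9e-6                           # assumed MWIR band, m
T_f, T_b, p_true = 800.0, 300.0, 2e-5  # assumed fire/background/fraction

# Forward model: pixel radiance is the area-weighted mix of fire and
# background radiances.
L = p_true * planck(lam, T_f) + (1 - p_true) * planck(lam, T_b)

def retrieve_p(T_b_assumed):
    """Invert for fire fraction, given a (possibly wrong) background T."""
    Bb, Bf = planck(lam, T_b_assumed), planck(lam, T_f)
    return (L - Bb) / (Bf - Bb)

for err in (-1.0, 0.0, +1.0):
    print(f"T_b error {err:+.0f} K -> retrieved p = "
          f"{retrieve_p(T_b + err):.2e}")
```

Even in this simplified setting, a 1 K background error moves the retrieved fraction by roughly an order of magnitude for a very small fire; the full two-unknown bi-spectral inversion that Giglio and Kendall analysed is more sensitive still.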
via a mixture argument [Big04]. For the integrated fluxes we set W^(V,r)(0) = 0 almost surely; we shall therefore always implicitly assume that any large-deviation rate blows up unless w(0) = 0.
Strategy and overview. Section 2 describes the setting of the paper: the topology used for the dynamic large deviations, and the precise assumptions on the propensities, reaction rates and initial condition. We then discuss existence and convergence of the path measures, which serves as a prerequisite for the large deviations. Section 3 is dedicated to the analysis of the rate functional. Most importantly, it is shown that the rate functional has an alternative formulation as a convex dual, and that it can be approximated by curves that are sufficiently regular to allow a change of measure. In a sense, these approximation lemmas are the core of the large-deviation proof. We shall see that the relatively simple formulation of the rate functional makes these proofs rather direct (they would be much more cumbersome when proving the large deviations of the concentrations only). Finally, Section 4 is devoted to the proof of the large-deviation principle, Theorem 1.1. It will be shown that one can always construct sufficiently steep compact cones on which the path measures place all but exponentially vanishing probability. We then show the lower bound of the measures with the random initial conditions via a double tilting argument, exploiting the approximation lemmas. After this, the upper bound is proven under deterministic initial conditions, which implies the large-deviations upper bound by a mixture argument.
Compared to traditional cellular networks, small cells in HetNets have smaller coverage, but higher capacity and less interference, all of which make mm-wave techniques naturally applicable in such scenarios. Typically, there are two options of system architecture for 5G backhaul: one is a centric solution, the other a distributed solution [4]. In a centric solution, a macrocell base station (BS) is situated in the center, with small cell BSs (SBSs) uniformly distributed around it. Direct links between SBSs are not allowed; instead, they are required to access the core network through the central macrocell BS, which is connected to the gateway by fibre links. In a distributed solution, all backhaul data are relayed to a single specific wired SBS instead of the macrocell BS. Backhaul data are transmitted over the established mm-wave links between adjacent SBSs and are finally collected by the designated wired SBS, which forwards them to the core network through fibre links. System-level simulations have shown that the distributed solution achieves higher energy efficiency and throughput gain, mainly due to sharing cooperative traffic among multiple wireless SBSs [4].
Abstract: The successful realisation of high-performance millimetre and sub-millimetre-wave radars requires key enabling technologies, many of which are not yet commercially available. This paper illustrates some of the key enabling technologies developed to address radar system requirements, including chirp generation, feedhorns, duplexing and non-mechanical beam steering. The type of high-performance radar system which can be achieved using these technologies is illustrated with the examples of the 'T-220' 94 GHz FMCW Doppler radar used for high-sensitivity target and clutter phenomenology studies and the 'CONSORTIS' 340 GHz 3D imaging radar developed for concealed object detection as required for next-generation aviation security screening.