The assimilation of observations should be an integral part of model development. Various approaches to incorporating constraining data exist, such as variational approaches minimizing a cost function using the adjoint of the model (Kato et al., 2013; Kaminski et al., 2013) or the use of ensemble Kalman filters (Lorenc, 2003; Gerber and Joos, 2013; Stöckli et al., 2011; Ma et al., 2017). A drawback of these methods is that the sampling process depends on the choice of the cost function, the design of which is not trivial when assimilating multiple observations simultaneously. Other approaches have also been investigated, such as using a generalized likelihood function for model calibration and uncertainty estimation (Beven and Binley, 1992). Here we employ the Latin hypercube sampling (LHS) approach (McKay et al., 1979), as used successfully in previous studies (Steinacher et al., 2013; Battaglia et al., 2016; Steinacher and Joos, 2016; Battaglia and Joos, 2018; Zaehle et al., 2005). It allows simultaneous stratified sampling of a range of parameters, given an appropriate prior parameter distribution, while offering the opportunity to change evaluation metrics a posteriori, thus enabling a sensible incorporation of multiple observational constraints. By improving the prior distribution iteratively it is possible to reasonably capture observations while considering trade-offs between the different targets. Additionally, this approach not only yields a best guess of parameter values but also contains information about the associated uncertainties. A drawback of this technique is that the size of the ensemble cannot be increased after the initial sampling and, if the range of the prior distribution is too large, the computational efficiency of the algorithm decreases.
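The stratified sampling at the core of LHS can be sketched in a few lines. The following is an illustrative implementation only (the three parameter ranges are hypothetical; a production setup would more likely use a library sampler such as scipy.stats.qmc.LatinHypercube):

```python
import random

def latin_hypercube(n_samples, bounds, seed=0):
    """Stratified LHS: each parameter range is split into n_samples
    equal strata, and each stratum is sampled exactly once."""
    rng = random.Random(seed)
    dims = len(bounds)
    # one independent random permutation of strata per parameter
    perms = []
    for _ in range(dims):
        p = list(range(n_samples))
        rng.shuffle(p)
        perms.append(p)
    samples = []
    for i in range(n_samples):
        point = []
        for d, (lo, hi) in enumerate(bounds):
            stratum = perms[d][i]
            u = (stratum + rng.random()) / n_samples  # uniform within stratum
            point.append(lo + u * (hi - lo))
        samples.append(point)
    return samples

# e.g. three hypothetical model parameters with prior ranges
ensemble = latin_hypercube(10, [(0.0, 1.0), (5.0, 15.0), (-2.0, 2.0)])
```

Because every stratum of every parameter is visited exactly once, even a small ensemble covers the full prior range of each parameter, which is what makes a posteriori re-weighting against multiple observational targets feasible.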
this site regular observations of canopy height, leaf area index (LAI) and FLUXNET gross primary productivity (GPP) are available.
Data assimilation has previously been implemented with the JULES land surface model: Ghent et al. (2010) used an ensemble Kalman filter to assimilate satellite observations of land surface temperature, Raoult et al. (2016) conducted experiments with four-dimensional variational data assimilation focusing on the carbon cycle, and Pinnington et al. (2018) assimilated satellite observations of soil moisture over Ghana. Of these studies, Raoult et al. (2016) and Pinnington et al. (2018) are directly related to the technique presented here in that they used variational DA techniques to estimate parameters in JULES. Raoult et al. (2016) used an adjoint of JULES (ADJULES) in their study to estimate carbon-cycle-relevant parameters for different plant functional types. However, the adjoint is currently only available for JULES version 2.2, and considerable effort would be required to update it to the most recent model version (5.3 as of 1 January 2019). Pinnington et al. (2018) used a more recent version of JULES (4.9) but avoided the need for an adjoint by using a Nelder–Mead simplex algorithm to perform the cost function minimization. This inevitably requires a greater number of model integrations than a derivative-based technique and is unlikely to work effectively for high-dimensional problems.
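As a rough illustration of the derivative-free alternative used by Pinnington et al. (2018), a minimal Nelder–Mead simplex minimiser is sketched below with a toy quadratic cost function (this is not their implementation, and the cost function is invented; in practice one would call scipy.optimize.minimize with method="Nelder-Mead"). Note that every reflection, expansion, contraction or shrink step costs at least one additional model evaluation, which is why the method needs many more model integrations than a gradient-based scheme:

```python
def nelder_mead(f, x0, step=1.0, tol=1e-8, max_iter=500):
    """Minimal derivative-free simplex minimisation (Nelder–Mead sketch)."""
    n = len(x0)
    # initial simplex: the start point plus one perturbed vertex per dimension
    simplex = [list(x0)] + [
        [x0[j] + (step if j == i else 0.0) for j in range(n)] for i in range(n)
    ]
    fvals = [f(v) for v in simplex]
    for _ in range(max_iter):
        order = sorted(range(n + 1), key=lambda i: fvals[i])
        simplex = [simplex[i] for i in order]
        fvals = [fvals[i] for i in order]
        if fvals[-1] - fvals[0] < tol:  # simplex has collapsed
            break
        centroid = [sum(v[j] for v in simplex[:-1]) / n for j in range(n)]
        worst = simplex[-1]
        refl = [2 * centroid[j] - worst[j] for j in range(n)]  # reflection
        fr = f(refl)
        if fr < fvals[0]:
            expd = [3 * centroid[j] - 2 * worst[j] for j in range(n)]  # expansion
            fe = f(expd)
            simplex[-1], fvals[-1] = (expd, fe) if fe < fr else (refl, fr)
        elif fr < fvals[-2]:
            simplex[-1], fvals[-1] = refl, fr
        else:
            con = [(centroid[j] + worst[j]) / 2 for j in range(n)]  # contraction
            fc = f(con)
            if fc < fvals[-1]:
                simplex[-1], fvals[-1] = con, fc
            else:  # shrink all vertices toward the best one
                best = simplex[0]
                simplex = [best] + [
                    [(v[j] + best[j]) / 2 for j in range(n)] for v in simplex[1:]
                ]
                fvals = [fvals[0]] + [f(v) for v in simplex[1:]]
    return simplex[0]

# toy "cost function": squared misfit to a hypothetical observation vector
xmin = nelder_mead(lambda x: (x[0] - 3.0) ** 2 + (x[1] + 1.0) ** 2, [0.0, 0.0])
```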
BEPS is a process-based ecosystem model mainly developed to simulate forest ecosystem carbon budgets (Chen et al., 1999; Ju et al., 2006; Liu et al., 1999). For many reasons, including the complexity of ecosystem processes, spatial–temporal variabilities, and representativeness errors, parameters in process-based models often do not represent their true values when these models are used to calculate carbon budgets over large areas or for long time periods (Mo et al., 2008). Errors in these parameters lead to biases in model results (other uncertainties, such as a lack of knowledge of historical land-use change and land management, also influence model results). In this study, we try to reduce biases in the BEPS-simulated carbon fluxes by incorporating atmospheric CO2 concentration measurements with data assimilation
optimized model with the final set of parameters does not degrade the fit to MODIS-NDVI and FLUXNET data that were assimilated in the first two steps (only minor changes in the RMSEs occur; see Fig. 8). This result has two important consequences. Most importantly, it suggests that current state-of-the-art LSMs (at least ORCHIDEE) have reached a level of development where consistent assimilation of multiple data streams is finally possible. This overcomes the most important limitation noted by Rayner (2010) to the widespread use of CCDAS systems. At a more technical level, it suggests that stepwise assimilation is a valid and feasible approach. Although we only carried the estimated parameter uncertainties from one step to the next (as a first, simpler approach), and not the full error variance–covariance matrix, we were able to propagate enough information to maintain an optimal model–data fit after the last step for the three data streams, as confirmed by the back-compatibility checks. MacBean et al. (2016a) provided a more specific analysis of this issue. However, not propagating the covariance terms may have a larger impact on the reduction of the inferred parameter uncertainties (see, for instance, the large parameter and flux error reductions in Figs. 9 and 11). The order of the different steps was dictated by the number of parameters we chose to expose to each data stream, from only a few phenology parameters for NDVI up to the largest set for atmospheric CO2. Recall
WRF-NMM was interfaced with MLEF, a hybrid ensemble-variational data assimilation method developed at Colorado State University. The analysis solution maximizes the likelihood of the posterior probability distribution, obtained by minimization of a cost function that includes a general nonlinear observation operator. As in typical variational and ensemble data assimilation methods, the cost function is derived within a Gaussian probability density function framework. Like other ensemble data assimilation algorithms, MLEF produces an estimate of the analysis uncertainty (e.g., the analysis error covariance). In addition to the common use of ensembles in calculating the forecast error covariance, the ensembles in MLEF are exploited to efficiently calculate the Hessian preconditioning and the gradient of the cost function. The MLEF method is well suited for use with highly nonlinear observation operators, at a small additional computational cost in the minimization procedure. Relevant prognostic WRF-NMM variables were selected as control variables, as they can significantly impact the initial conditions, which can, in turn, influence the forecast. This selection includes the following variables: temperature (T), specific humidity (q), hydrostatic pressure depth (PD), the U and V components of the wind, and cloud water mass (CWM, the total cloud condensate in WRF-NMM, which combines all cloud hydrometeors into a total sum). The goal is to minimize the following cost function:
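Under the Gaussian framework described above, this cost function takes the standard variational form (reconstructed here as an assumption; the exact notation of the original formulation may differ):

J(x) = (1/2) (x - x_b)^T P_f^{-1} (x - x_b) + (1/2) [y - H(x)]^T R^{-1} [y - H(x)],

where x is the control vector (T, q, PD, U, V and CWM), x_b the background (forecast) state, P_f the ensemble-derived forecast error covariance, y the observation vector, H the (possibly nonlinear) observation operator, and R the observation error covariance.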
Advancements in both land surface models (LSMs) and land surface data assimilation, especially over the last decade, have substantially improved the ability of land data assimilation systems (LDAS) to estimate evapotranspiration (ET). This article provides a historical perspective on international LSM intercomparison efforts and the development of LDAS systems, both of which have improved LSM ET skill. In addition, an assessment of ET estimates from current LDAS systems is provided, along with current research that demonstrates improvement in LSM ET estimates due to assimilating satellite-based soil moisture products. Using the ensemble Kalman filter in the Land Information System, we assimilate both NASA and Land Parameter Retrieval Model (LPRM) soil moisture products into the Noah LSM Version 3.2 with the North American LDAS phase 2 (NLDAS-2) forcing to mimic the NLDAS-2 configuration. Through comparisons with two global reference ET products over the NLDAS-2 domain, one based on interpolated flux tower data and one from a new satellite ET algorithm, we demonstrate improvement in ET estimates only when assimilating the LPRM soil moisture product.
This process first determines the original GTAP sectors in which biofuels are embedded. Then, data were obtained on the monetary values of biofuels produced by country; a proper cost structure for each biofuel pathway; the users of biofuels; and the feedstock for each biofuel. Finally, it uses these data items and a set of programs to introduce biofuels into the database. As an example, in the standard GTAP database, US corn ethanol is embedded in the food sector. Therefore, this sector was divided into food and ethanol sectors. To accomplish this task, we needed to evaluate the monetary values of corn ethanol and its by-product (DDGS) produced in the US at 2011 prices. We also needed to determine the cost structure of this industry in the US in 2011. This cost structure should represent the shares of various inputs (including intermediate inputs and primary factors of production) used by the ethanol industry in its total costs in 2011. For the case of US corn ethanol, which represented a well-established industry in 2011, these data items should match national-level information. Hence, as mentioned in the previous section, we collected data from trusted sources to prepare the required data for all types of first-generation biofuels produced across the world in 2011. For the second generation of biofuels (e.g., ethanol produced from switchgrass or miscanthus), which are not produced at a commercial level, we rely on the literature to determine their production costs and their cost structures. For these biofuels, we also follow the literature to define new sectors (e.g., miscanthus or switchgrass) and their cost structures so as to include their feedstock at 2011 prices.
other approach, as described above, Lagrangian data assimilation (LaDA), appends equations for the advection of an instrument's coordinates to the model equations for the velocity field. LaDA was developed and applied successfully in several theoretical and methodological studies over the last decade (Ide et al. 2002; Kuznetsov et al. 2003; Salman et al. 2006; Spiller et al. 2008; Vernieres et al. 2011). In some ways there is a trade-off between an observation operator that is not local in time and could be nonlinear, which is the case with interpolation, and the strongly nonlinear dynamics of modeled advected paths where the paths are observed directly. The latter approach demands an assimilation strategy that can deal with strong nonlinearities. This further motivates the primary idea behind the proposed hybrid assimilation scheme: use a particle filter in the low-dimensional, highly nonlinear instrument coordinate variables and an ensemble Kalman filter in the high-dimensional flow variables. We describe the basics of each filtering method below.
In this paper, apart from the water cut data, we consider coarse-scale measured data as well. The coarse-scale data are assumed to be permeability at some specified level of precision. The unknown variables, the fine-scale permeabilities, are estimated using a modification to the EnKF algorithm, linking the data at different scales via upscaling. It is important to resolve fine-scale heterogeneity for various purposes, such as enhanced oil recovery and environmental remediation. The main idea behind upscaling is to obtain an effective coarse-scale permeability which locally yields the same average response as the underlying fine-scale field. Single-phase flow upscaling procedures for two-phase flow problems have been discussed by many authors; see, e.g., [10,11] and also Section 3.1. We will refer to our proposed variant of EnKF as coarse-scale EnKF, and to assimilation using dynamic data only, such as fractional flow data, as regular EnKF. The coarse-scale permeability data could be obtained either from geologic considerations or from coarse-scale inversion of dynamic, fractional flow data on a coarse grid, as considered in [8,12]. This coarse-scale, static data can be viewed as a constraint, to be satisfied up to the prescribed variance, when obtaining the fine-scale estimates in every data assimilation cycle. Upscaling methods relate the solution at the fine scale to the coarse scale; in the Kalman filter context, this therefore amounts to modeling a nonlinear observation operator. In our coarse-scale EnKF approach, we use the measured data in batches, such that the estimate from one datum becomes a prior while assimilating the other observation (see Section 3 for further details). Though in this paper we used coarse-scale data at only one scale, our approach can easily be generalized to assimilate data at multiple scales by appropriately modeling the linkage between the different scales.
Also, our upscaling method is independent of the underlying fine-scale field.
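To make the role of upscaling as a (possibly nonlinear) observation operator concrete, the following sketch performs one stochastic EnKF analysis step in which the "observation" of each ensemble member's fine-scale permeability field is a coarse-scale average. All fields, sizes and error variances are hypothetical, and a simple arithmetic mean stands in for a proper flow-based upscaling operator:

```python
import random
import statistics

def enkf_update_scalar_obs(ens, h, y_obs, r_var, rng):
    """Stochastic EnKF analysis step for one scalar observation with a
    (possibly nonlinear) observation operator h, e.g. an upscaling average."""
    n, dim = len(ens), len(ens[0])
    hx = [h(x) for x in ens]
    hbar = statistics.fmean(hx)
    xbar = [statistics.fmean(x[j] for x in ens) for j in range(dim)]
    # ensemble cross-covariance between the state and the simulated observation
    cov_xh = [sum((ens[i][j] - xbar[j]) * (hx[i] - hbar) for i in range(n)) / (n - 1)
              for j in range(dim)]
    var_h = sum((v - hbar) ** 2 for v in hx) / (n - 1)
    gain = [c / (var_h + r_var) for c in cov_xh]  # Kalman gain (scalar obs)
    analysis = []
    for i in range(n):
        y_pert = y_obs + rng.gauss(0.0, r_var ** 0.5)  # perturbed observation
        analysis.append([ens[i][j] + gain[j] * (y_pert - hx[i]) for j in range(dim)])
    return analysis

rng = random.Random(1)
# hypothetical fine-scale log-permeability field (4 cells), 200-member prior
prior = [[rng.gauss(0.0, 1.0) for _ in range(4)] for _ in range(200)]
upscale = lambda x: sum(x) / len(x)  # arithmetic-mean upscaling operator
posterior = enkf_update_scalar_obs(prior, upscale, 1.5, 0.05, rng)
```

Because the gain is built from ensemble statistics of h(x) rather than a linearised operator matrix, the same code applies unchanged when the upscaling operator is nonlinear (e.g. a harmonic mean or a local flow solve).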
5.3. Policy implications and model application
Based on the above observation, PES could contribute to sustainable development policy design before markets appear or are created that place value on environmental considerations (Stenger et al., 2009). The social welfare value of policy intervention is reduced if households fail to adopt practices that generate greater benefits than costs. PES could be evaluated based on the type of payments provided by the schemes (Engel et al., 2008). In the simulation a price premium was assigned to PES adopters that met the biodiversity criteria. However, the level of adoption was low. There are three possible reasons for the observed low adoption levels: (1) the yields from rubber agroforest are already low, (2) the income generated by the price premiums was lower than that from the other land uses, and (3) the proposed biodiversity performance indicators are hard to meet. The first two of these reasons are addressed in this paper, which recommends that, given the low yields from rubber agroforest, eco-certification schemes for conserving rubber agroforest be evaluated. To address the third reason there is a need to establish biodiversity performance indicators that are both meaningful and that are understood and achievable by participating communities. According to Jack et al. (2008), the overall viability of PES schemes is determined by the preferences of all relevant stakeholders. Thus, proposals for scaling up PES schemes should consider the factors affecting the decision to adopt or participate in PES schemes (e.g., the price premium). With the current interest in Reducing Emissions from Deforestation and Degradation plus (REDD+) schemes to achieve emission reductions along with biodiversity conservation (Minang and van Noordwijk, 2012) and high carbon-stock development pathways (Minang et al., 2012), the predicted effectiveness of an eco-certification scheme for biodiversity-friendly rubber production is noteworthy.
data stream assimilation framework (e.g. Richardson et al., 2010; Keenan et al., 2012; Kaminski et al., 2012; Forkel et al., 2014; Bacour et al., 2015). However, whilst the potential benefit of adding extra data streams to constrain the C cycle of LSMs is clear, multiple data stream assimilation is not as simple as it may seem. This is particularly true when considering a regional-to-global-scale, multiple-site optimisation of a complex LSM that contains many parameters, and which typically takes on the order of minutes to an hour to run a one-year simulation. When using more than one data stream there is the option to include all data streams together in the same optimisation (simultaneous approach) or to take a sequential (step-wise) approach. Mathematically, the simultaneous approach is optimal, but complications may arise due to computational constraints related to the inversion of large matrices or the requirement of numerous simulations, particularly for global datasets (e.g. Peylin et al., 2016), and/or due to the "weight" of different data streams in the optimisation (e.g. Wutzler and Carvalhais, 2014). On the other hand, in a step-wise assimilation the parameter error covariance matrix has to be propagated at each step, which implies that it can be computed. If the parameter error covariance matrix can be properly estimated and is propagated between each step, the step-wise approach should be mathematically equal to the simultaneous one. However, many inversion algorithms (e.g. derivative-based methods that use the gradient of the cost function to find its minimum) require assumptions of model (quasi-)linearity and Gaussian parameter and observation error distributions (Tarantola, 1987, p. 195).
If these assumptions are violated, or the error distributions are poorly defined, it is likely that the step-wise approach will not be equal to the simultaneous approach, because information will be lost at each step due to an incorrect calculation of the posterior error covariance matrix at the end of each step. An incorrect description of the observation(–model) error distribution could result from (i) a wrong assumption about the distribution of the residuals between the observations and the model, (ii) a poor characterisation of the error correlations, (iii) an incompatibility between the model and the data (possibly due to a model structural issue or differences in how a variable is characterised), or (iv) a bias in the observations that is not accounted for (i.e. is treated as a random error). As mentioned, whilst a simultaneous optimisation is mathematically more rigorous in the sense that the error correlations are treated within the same inversion, if the prior distributions are not properly characterised any bias may be aliased to the wrong parameters (Wutzler and Carvalhais, 2014), possibly more so than in a step-wise approach.
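The equivalence of the step-wise and simultaneous approaches under linear-Gaussian assumptions can be verified directly in the scalar case. The sketch below (with made-up numbers for the prior and two "data streams") propagates both the posterior mean and the error variance from one step to the next, and recovers the simultaneous precision-weighted solution exactly:

```python
def gaussian_update(mu, var, y, r):
    """Scalar Bayesian (Kalman) update: Gaussian prior N(mu, var),
    direct observation y with error variance r."""
    k = var / (var + r)  # Kalman gain
    return mu + k * (y - mu), (1 - k) * var

# hypothetical prior parameter estimate and two independent data streams
mu0, var0 = 0.0, 4.0
obs = [(1.2, 0.5), (0.8, 1.0)]  # (value, error variance) pairs

# step-wise: propagate the posterior (mean AND variance) between steps
mu, var = mu0, var0
for y, r in obs:
    mu, var = gaussian_update(mu, var, y, r)

# simultaneous: combine all precisions in a single update
prec = 1 / var0 + sum(1 / r for _, r in obs)
mu_sim = (mu0 / var0 + sum(y / r for y, r in obs)) / prec
var_sim = 1 / prec
```

If only the mean were carried between steps (variance reset to the prior), the second update would over-weight the prior and the equivalence would break, which is the information loss discussed above.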
The Mercator Ocean 0.25° operational global ocean analysis and forecasting system (Lellouche et al., 2013) (hereafter referred to as PSY3) was used for this study. The PSY3 system has been operational since 2005. It routinely assimilates SLA, satellite SST and in situ data. The model is Version 3.1 of NEMO (Nucleus for European Modelling of the Ocean; Madec and the NEMO team, 2008) with a 0.25° tri-polar ORCA grid. The horizontal resolution is 27 km at the Equator and decreases to 6 km toward the poles. A total of 50 vertical levels are used, and the discretization decreases from 1 m resolution at the surface down to 450 m at the bottom, with 22 levels within the upper 100 m. The NEMO system uses the OPA (Océan PArallélisé) ocean model coupled with the LIM2 ice model (Fichefet and Morales Maqueda, 1997). Three-hourly atmospheric fields from the European Centre for Medium-Range Weather Forecasts (ECMWF) are used to force the ocean and ice models. Momentum, heat and freshwater fluxes are computed from CORE (Coordinated Ocean-Ice Reference Experiment) bulk formulae (Large and Yeager, 2009). The data assimilation scheme was developed at Mercator Ocean (Tranchant et al., 2008; Lellouche et al., 2013). It is based on a reduced-order (singular evolutive extended) Kalman filter (SEEK) (Pham et al., 1998) with localized 3-D multivariate modal decomposition of the forecast error and a 7-day assimilation window. The background error covariance matrix is static over the 7-day assimilation window and varies "climatologically" over the year. It is estimated from an ensemble of anomalies computed from a multiyear, forced simulation. The time of the analysis corresponds to the middle of the assimilation window, the fourth day. The increment on temperature and salinity, computed during the analysis step from the in situ profile innovations, is projected onto the barotropic height and U, V fields thanks to the multivariate properties of the model covariance matrix.
A 3-D variational scheme provides a correction of the slowly evolving large-scale biases in temperature and salinity below the mixed layer. There is no relaxation to an in situ climatology or reference field in the system. The calculated increments are added to the model solution progressively in time over the assimilation window using an incremental analysis update (IAU) method (Bloom et al., 1996) to avoid spin-up effects at the beginning of each assimilation cycle.
TopNet uses seven calibrated parameters for each subcatchment. To reduce the dimensionality of the parameter estimation problem, initial values for the parameters were estimated from the sources described. The spatial distribution of the parameters was then preserved, and the values were adjusted uniformly using a spatially constant set of parameter multipliers. Calibration used precipitation and climate data from Tait et al. (2006), who interpolated data from over 500 climate stations in New Zealand across a regular 0.05° latitude/longitude grid (approximately 5 km × 5 km). The precipitation data were bias-corrected using a water balance approach (Woods et al., 2006). These data are provided at daily time steps, and are disaggregated to hourly data before use in the model. The parameter values used here are those in current use for the operational forecasting system. The calibration methods varied by catchment according to the responsible hydrologist, and consisted of a semi-automatic method using either Monte Carlo simulation (two catchments) or the ROPE (RObust Parameter Estimation) calibration method (Bardossy and Singh, 2008) (five catchments) to obtain a small ensemble of possible parameter sets. This was followed by review by a hydrologist to determine a single preferred set based on visual inspection of the model simulation results. Note that parameter values are not perturbed during the assimilation. Instead, we allow for model error by perturbing the state variables.
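The multiplier approach can be sketched in a few lines (the parameter names, values and three-subcatchment grid below are hypothetical, not actual TopNet parameters): each spatially distributed field is scaled by a single spatially constant factor, so calibration adjusts one multiplier per parameter rather than seven values per subcatchment, while the spatial pattern is preserved:

```python
def apply_multipliers(param_fields, multipliers):
    """Scale each spatially distributed parameter field by a single
    (spatially constant) multiplier, preserving the spatial pattern."""
    return {name: [multipliers[name] * v for v in field]
            for name, field in param_fields.items()}

# hypothetical parameter fields over three subcatchments
fields = {
    "hydraulic_conductivity": [1.0, 2.0, 4.0],
    "drainable_porosity": [0.3, 0.2, 0.1],
}
scaled = apply_multipliers(
    fields, {"hydraulic_conductivity": 0.5, "drainable_porosity": 1.2}
)
```

Note that the ratio between any two subcatchments is unchanged after scaling, which is exactly the "spatial distribution preserved" property described above.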
One difficulty in quantifying observation-error correlations is that they can only be estimated in a statistical sense, not calculated directly. The method proposed by Desroziers et al. (2005) has become popular for estimating observation-error statistics due to its simplicity (a detailed discussion of this diagnostic is given in section 3). The diagnostic provides an estimate of the observation-error covariance matrix using the statistical average of observation-minus-background and observation-minus-analysis residuals, assuming that the analysis is calculated using least-variance linear statistical estimation. It has also been shown to be applicable to scenarios where the analysis is calculated using both 3D and 4D variational assimilation methods (Desroziers et al., 2005; Stewart, 2010). However, the diagnostic only provides a correct estimate of the observation-error covariance matrix if the assumed background- and observation-error statistics used in the assimilation are correct. As well as the impact of the assumed error statistics, the diagnostic has further limitations, such as the error introduced when using nonlinear observation operators (Terasaki and Miyoshi, 2014) and the fact that an ergodic assumption is often made in order to obtain sufficient sample residuals (Todling, 2015). However, Desroziers et al. (2005) also show that the result may be improved if successive iterations of the diagnostic are applied. Furthermore, with careful interpretation of the results, the diagnostic can still provide useful information about the true observation-error statistics when the assumed statistics, used in the assimilation, are not exact (Ménard, 2016; Waller et al., 2016b). Despite these limitations, the diagnostic has been used successfully in some studies to estimate observation-error variances and correlations.
It has been used in simple model experiments (Li et al., 2009; Stewart, 2010; Miyoshi et al., 2013) and to estimate time-varying observation errors (Waller et al., 2014a). The diagnostic has also been applied to operational NWP observations to calculate interchannel error covariances (Bormann and Bauer, 2010; Bormann et al., 2010, 2016; Stewart et al., 2014; Weston et al., 2014; Waller et al., 2016a) and spatial error covariances (Cordoba et al., 2016; Waller et al., 2016a, 2016c) in variational assimilation systems.
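A minimal Monte Carlo illustration of the diagnostic for a scalar state with an identity observation operator and correctly specified statistics (all variances here are hypothetical): the statistical average of the product of observation-minus-analysis and observation-minus-background residuals recovers the true observation-error variance R:

```python
import random
import statistics

def desroziers_r_estimate(n, b_var, r_var, seed=0):
    """Monte Carlo check of the Desroziers et al. (2005) diagnostic:
    E[(y - H x_a)(y - H x_b)] = R when the statistics assumed in the
    (optimal, linear) analysis are correct. Here H = 1 (scalar case)."""
    rng = random.Random(seed)
    k = b_var / (b_var + r_var)  # optimal Kalman gain
    prods = []
    for _ in range(n):
        truth = rng.gauss(10.0, 1.0)
        xb = truth + rng.gauss(0.0, b_var ** 0.5)  # background
        y = truth + rng.gauss(0.0, r_var ** 0.5)   # observation
        xa = xb + k * (y - xb)                     # analysis
        prods.append((y - xa) * (y - xb))
    return statistics.fmean(prods)

r_hat = desroziers_r_estimate(200_000, b_var=2.0, r_var=0.5)
```

Analytically, y - xa = (1 - k)(y - xb), so the expected product is (1 - k)(B + R) = R; the Monte Carlo average should therefore settle near the prescribed R (0.5 here). Rerunning the sketch with a deliberately wrong gain illustrates the paper's caveat that the estimate is only correct when the assumed statistics are.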
and time intervals, although large increments, e.g. due to corrected precipitation during a storm, can have an effect on the start and end points of an observed phenomenon, such as a drought. When classifying such an event defined at a specific quantile level, there will be a 2-fold impact from the assimilation: the change in the quantile of interest as well as the change in soil moisture itself. Here, we highlight a sample dry event on the east coast to show to what extent its classification changes through the assimilation impact. Figure 13 shows root-zone soil moisture at or below the 10 % quantile level for the open-loop run as well as the data assimilation experiment DA 2 for soil moisture conditions in early 2010, thus at the beginning of the assimilation period. Due to the higher 10 % quantile for DA 2, as seen in Figs. 11 and 12, the spatial extent of the cluster for DA 2 is reduced, but the spatial patterns of soil moisture remain largely the same. At some time periods, not shown here, a higher degree of noise is noticeable within the assimilation dataset. This is likely because non-spatially correlated noise was applied to the meteorological forcings, resulting in a heterogeneous background error field across grid points. We thus conclude that, despite having carried out the assimilation in 1-D, spatially correlated noise is recommended for such applications. An alternative would be to further increase the ensemble size, but at the expense of higher computational resources. Additionally, when trying to extract meaningful statistics on the occurrence of events, such as droughts, it might be particularly important to clean up the dataset in the case of data assimilation using simple filter algorithms, such as those applied by Herrera-Estrada et al. (2017).
We want to highlight that the event shown is for demonstration purposes and is not linked to any major drought event, which would require a more in-depth analysis and references to independent data sources.
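The quantile-based event classification discussed above can be sketched as follows (the grid size, climatology length and uniform soil moisture values are all hypothetical): each grid point is flagged as dry when its current value falls at or below the 10 % quantile of its own climatological record, so an assimilation-induced shift in either the quantile or the state changes the classified extent:

```python
import random

def quantile(values, q):
    """Empirical quantile via linear interpolation between order statistics."""
    s = sorted(values)
    pos = q * (len(s) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(s) - 1)
    return s[lo] + (pos - lo) * (s[hi] - s[lo])

def drought_mask(climatology, current, q=0.10):
    """Flag grid points whose current soil moisture is at or below the
    q-quantile of that point's own climatological record."""
    return [cur <= quantile(hist, q) for hist, cur in zip(climatology, current)]

rng = random.Random(42)
# hypothetical 100-point grid with a 30-year soil moisture climatology
clim = [[rng.uniform(0.1, 0.4) for _ in range(30)] for _ in range(100)]
today = [rng.uniform(0.1, 0.4) for _ in range(100)]
mask = drought_mask(clim, today)
```

Applying the same mask to an open-loop and an assimilation run (each against its own climatology) reproduces the 2-fold impact described above: both the threshold and the state differ between the two experiments.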
Soil moisture is a key variable in short- and medium-range meteorological modelling as well as in climate and hydrological studies. Over vegetated areas, soil moisture can control plant transpiration. Continuous land surface processes such as the evolution of soil moisture, plant transpiration and soil evaporation can be modelled with Land Surface Models (LSMs). Global Surface Soil Moisture (SSM) products are now operationally available from microwave spaceborne instruments such as ASCAT (Advanced Scatterometer onboard METOP, Wagner et al., 2007) and the Advanced Microwave Scanning Radiometer for EOS (AMSR-E) sensor (Njoku et al., 2003, for the official product and Owe et al., 2001, 2008, for a new retrieval product), or will be available from the recently launched SMOS (Soil Moisture and Ocean Salinity, ESA/CNES, Kerr et al., 2001, 2007) satellite dedicated to the observation of the microwave brightness temperature (TB) at L-band, and from SMAP (Soil Moisture Active Passive, Entekhabi et al., 2004), which is scheduled for launch in 2015. While microwave remote sensing provides global maps of SSM (Schmugge, 1983), combining this information with LSM simulations through a Land Data Assimilation System (LDAS) allows the root zone soil moisture (w2)
on a set of German stations (Fig. 2 and Table 2). We plotted in Fig. 2 the daily ozone profile obtained for the summer of 2009, averaged over each type of observation. Compared to the yearly average, the summertime suburban profile shows a higher ozone concentration in the daytime while keeping the lowest nighttime ozone level, which is in accordance with the variability criterion. Thus, differences in average ozone concentrations between station types are most important in the early morning and are weakest at 15:00 UTC, when all observation types reach similar daily maxima. Observations associated with the lowest daily variability are mostly representative of a remote environment rather than a mountainous one because we reject stations located above 800 m height. Remote stations located between 300 m a.s.l. and 800 m a.s.l. exhibit a higher baseline in ozone concentrations, probably associated with the tropospheric ozone vertical gradient (Chevalier et al., 2007). These stations are therefore discriminated from the remaining remote ones. The latter are shared between remote continental and coastal stations under the influence of generally rather clean marine air masses, such as the Mace Head station in Ireland. Finally, the use of the P50DV statistics can be extended to the whole of Europe with some exceptions; for instance, we can notice that remote stations with low ozone variability that are located in Scandinavia do not have the highest daily mean, as usually observed for remote stations. Furthermore, we note that the geographical distribution of the stations is coherent with their attributed environment (Fig. 1); for example, suburban stations (red circles) are often located in high-emission regions close to urban areas such as Paris, Berlin or the Po Valley.
Also, as seen from Table 2, results obtained with the FLEM05 approach are often consistent with the Airbase classification: around 90 % of the remote stations (FLEM05) are rural (Airbase) and more than 80 % of the suburban stations (FLEM05) are also "suburban" or "urban" in Airbase. However, discrepancies are found in the ozone categorization with respect to the standard classification. For instance, the effect of the urban environment on ozone concentrations for many "urban" Airbase sites is small enough that these sites are still representative of a larger environment: these sites constitute around 40 % of our (FLEM05) total rural sites. Thus, the classification procedure applied here allows us to obtain a significantly larger observational database than the initial classification proposed by Airbase.
The concept of data assimilation was discussed, and two of the advanced techniques, EnKF and PF, were explained in detail. Although DA techniques are used mainly for reducing uncertainty, there is still a lack of consensus in the hydrologic community on the selection and implementation of a suitable land DA method to meet this need. It needs to be realized that contemporary DA methods are used for estimating the state variables (here, soil moisture, SWE or SCA) and the uncertainties associated with them; however, not all sources of uncertainty are addressed in the assimilation process. These include uncertainty in model parameters and in model structure, which are ignored in DA implementations. Although in this paper we did not intend to provide a comprehensive review of all data assimilation methods, we focused on two emerging techniques as reported by a few studies in Section 4. It was mentioned that ensemble filtering using the PF results in a full representation of the probability distributions of the prognostic variables and even of the parameters. The EnKF is limited by the linear updating rule of the original Kalman filter and by the assumption of Gaussian-distributed errors in the observations and the model. Considering that the probability distributions of soil moisture and snow water equivalent change significantly over time and are often non-normal, these assumptions limit the application of the EnKF in strongly nonlinear hydrologic models. Given the potential of PFs, further implementation of the PF as an alternative procedure for operational data assimilation is suggested. The synthetic study through an OSSE design, as seen in Section 4.1.3, is an appropriate procedure for judging the merits of a given technique for land data assimilation, and the method can be used to adequately quantify and minimize the hydrologic predictive uncertainty while including model parameter uncertainty in the whole scheme.
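A minimal bootstrap particle filter update of the kind discussed above (Gaussian observation likelihood followed by systematic resampling; the bimodal soil moisture prior and all numbers are hypothetical) illustrates why the PF can represent non-Gaussian distributions that the linear EnKF update cannot:

```python
import math
import random
import statistics

def particle_filter_update(particles, y_obs, obs_std, rng):
    """One bootstrap particle filter analysis step: weight particles by a
    Gaussian observation likelihood, then apply systematic resampling.
    No linearity or Gaussianity of the prior is assumed."""
    # importance weights from the observation likelihood
    w = [math.exp(-0.5 * ((y_obs - p) / obs_std) ** 2) for p in particles]
    total = sum(w)
    w = [wi / total for wi in w]
    # cumulative weights
    cum, c = [], 0.0
    for wi in w:
        c += wi
        cum.append(c)
    # systematic resampling: one random offset, evenly spaced positions
    n = len(particles)
    u = rng.random() / n
    resampled, j = [], 0
    for i in range(n):
        pos = u + i / n
        while j < n - 1 and cum[j] < pos:
            j += 1
        resampled.append(particles[j])
    return resampled

rng = random.Random(7)
# bimodal (non-Gaussian) prior: dry and wet soil moisture states
prior = [rng.gauss(0.15, 0.02) if rng.random() < 0.5 else rng.gauss(0.35, 0.02)
         for _ in range(2000)]
posterior = particle_filter_update(prior, y_obs=0.34, obs_std=0.03, rng=rng)
```

An observation near the wet mode collapses the posterior onto that mode, something a single Gaussian increment applied to all members cannot reproduce; this is the non-normality argument made above for soil moisture and SWE.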
compared to other crops.
Under the act, overall crop area in the United States declines by 0.4%. Barley, oats, and sorghum decrease between 2.3% and 2.4%, whereas corn and soybeans decrease by 0.9% and 0.1%, respectively, in the same scenario. The carbon tax mostly impacts fertilizer and, thus, makes using marginal cropland unprofitable. US corn and sorghum exports decrease by 4.9% and 3.4%, respectively. The decrease in soybean exports is smaller than for corn at 0.8%. The largest change in US exports is observed for sunflower seeds with a decrease of 7.5%.
This work introduces a new variational Bayes data assimilation method for the stochastic estimation of precipitation dynamics using radar observations for short-term probabilistic forecasting (nowcasting). A previously developed spatial rainfall model, based on the decomposition of the observed precipitation field using a basis function expansion, captures the precipitation intensity from radar images as a set of 'rain cells'. The prior distributions for the basis function parameters are carefully chosen to have a conjugate structure for the precipitation field model, allowing a novel variational Bayes method to be applied to estimate the posterior distributions in closed form by solving an optimisation problem, in a spirit similar to 3D-Var analysis but seeking approximations to the posterior distribution rather than simply the most probable state. A hierarchical Kalman filter is used to estimate the advection field based on the assimilated precipitation fields at two times. The model is applied to tracking precipitation dynamics in a realistic setting, using UK Met Office radar data from both a summer convective event and a winter frontal event. The performance of the model is assessed both traditionally and using probabilistic measures of fit based on ROC curves. The model is shown to provide very good assimilation characteristics and promising forecast skill. Improvements to the forecasting scheme are discussed.