Abstract. The GEOS-Chem global chemical transport model (CTM), used by a large atmospheric chemistry research community, has been re-engineered to also serve as an atmospheric chemistry module for Earth system models (ESMs). This was done using an Earth System Modeling Framework (ESMF) interface that operates independently of the GEOS-Chem scientific code, permitting the exact same GEOS-Chem code to be used as an ESM module or as a stand-alone CTM. In this manner, the continual stream of updates contributed by the CTM user community is automatically passed on to the ESM module, which remains state of science and referenced to the latest version of the standard GEOS-Chem CTM. A major step in this re-engineering was to make GEOS-Chem grid independent, i.e., capable of using any geophysical grid specified at run time. GEOS-Chem data sockets were also created for communication between modules and with external ESM code. The grid-independent, ESMF-compatible GEOS-Chem is now the standard version of the GEOS-Chem CTM. It has been implemented as an atmospheric chemistry module into the NASA GEOS-5 ESM. The coupled GEOS-5–GEOS-Chem system was tested for scalability and performance with a tropospheric oxidant-aerosol simulation (120 coupled species, 66 transported tracers) using 48–240 cores and message-passing interface (MPI) distributed-memory parallelization. Numerical experiments demonstrate that the GEOS-Chem chemistry module scales efficiently for the number of cores tested, with no degradation as the number of cores increases. Although inclusion of atmospheric chemistry in ESMs is computa-
We have conducted a comparison of an EnKF and a 4D-Var data assimilation system using a comprehensive stratospheric chemical transport model. We considered 4D-Var and EnKF configurations that are normally used for chemical data assimilation applications. Both data assimilation systems have online estimation of error variances based on the Desroziers method and share the same correlation model for all prescribed error correlations (i.e. the background error covariance for 4D-Var, initial error and model error for EnKF), so that each data assimilation system is nearly optimal and the two can be compared to each other. A previous comparison study by Skachko et al. (2014) showed that for chemical tracer transport only, both assimilation systems provide results of essentially similar quality despite the difference in practical implementation of each method: the 4D-Var was applied in its strong-constraint formulation with a 24 h assimilation window under the assumption of no model error over this period, whereas the EnKF was used to sequentially assimilate observations every 30 min, with model error perturbations added every 30 min. This study examines in what way the inclusion of chemistry changes the performance of the assimilation systems, but perhaps more importantly how an EnKF and a 4D-Var chemical data assimilation can be implemented in a real-life situation with several modelled and assimilated species. In this study we assimilate ozone, HCl, HNO3, H2O and N2O observations from EOS Aura MLS.
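The online error-variance estimation mentioned above can be illustrated with a toy example. The sketch below (an illustration under stated assumptions, not code from either assimilation system) applies the Desroziers diagnostics to a scalar Kalman analysis and recovers the prescribed background and observation error standard deviations from innovation statistics:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
sigma_b, sigma_o = 2.0, 1.0                  # true background / observation errors
truth = rng.normal(0.0, 5.0, n)
x_b = truth + rng.normal(0.0, sigma_b, n)    # background state
y = truth + rng.normal(0.0, sigma_o, n)      # observations

# Scalar Kalman analysis (observation operator H = 1)
k = sigma_b**2 / (sigma_b**2 + sigma_o**2)
x_a = x_b + k * (y - x_b)

d_b = y - x_b    # background innovation
d_a = y - x_a    # analysis residual

# Desroziers diagnostics: E[d_a d_b] = sigma_o^2 and E[(x_a - x_b) d_b] = sigma_b^2
sigma_o_est = np.sqrt(np.mean(d_a * d_b))
sigma_b_est = np.sqrt(np.mean((x_a - x_b) * d_b))
# Both estimates land close to the prescribed sigma_o and sigma_b.
```

In an operational system the same cross-products of innovations and residuals are accumulated over many observations to tune the prescribed variances online.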
Abstract. This paper discusses the implementation and performance of an array of gas-phase chemistry solvers for the state-of-the-science GEOS-Chem global chemical transport model. The implementation is based on the Kinetic PreProcessor (KPP). Two Perl parsers automatically generate the needed interfaces between GEOS-Chem and KPP, and allow access to the chemical simulation code without any additional programming effort. This work illustrates the potential of KPP to positively impact global chemical transport modeling by providing additional functionality as follows. (1) The user can select a highly efficient numerical integration method from an array of solvers available in the KPP library. (2) KPP offers a wide variety of user options for studies that involve changing the chemical mechanism (e.g., a set of additional reactions is automatically translated into efficient code and incorporated into a modified global model). (3) This work provides access to tangent linear, continuous adjoint, and discrete adjoint chemical models, with applications to sensitivity analysis and data assimilation.
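The value of choosing an appropriate integrator is easy to see on a toy stiff mechanism. The sketch below is illustrative only (KPP-generated solvers, e.g. Rosenbrock methods, are far more sophisticated): it shows why implicit methods are the workhorse for atmospheric chemistry, since a backward Euler step remains stable even when the time step is much longer than the fastest chemical timescale:

```python
import numpy as np

# Toy stiff linear mechanism (not a real KPP mechanism): A -> B (fast), B -> C (slow).
k1, k2 = 1.0e4, 1.0                      # widely separated rates make the system stiff
J = np.array([[-k1, 0.0, 0.0],
              [ k1, -k2, 0.0],
              [0.0,  k2, 0.0]])          # linear ODE dy/dt = J y

# Backward (implicit) Euler: (I - dt*J) y_{n+1} = y_n. Stable even with
# dt >> 1/k1, the property that implicit/Rosenbrock chemistry solvers exploit.
dt, nsteps = 0.1, 50
y = np.array([1.0, 0.0, 0.0])
M = np.eye(3) - dt * J
for _ in range(nsteps):
    y = np.linalg.solve(M, y)
# Mass (A + B + C) is conserved because the columns of J sum to zero.
```

An explicit Euler step with the same dt would require |1 - dt*k1| < 1 for stability, i.e. dt < 2e-4 here, some 500 times smaller.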
The evaluation methodology is divided into two major parts. First, the LBCs from the global EMEP model are directly compared, at the lateral boundaries of the European model domain, with satellite retrievals from the Terra-MOPITT (Measurements Of Pollution In The Troposphere) and Aura-OMI instruments. Second, the study investigates the impact of LBCs on regional concentration fields by applying the LBCs from the global CTM to the regional CTM, MATCH. The MATCH model results are compared to satellite retrievals from the Aqua-AIRS (Atmospheric InfraRed Sounder) instrument as well as to ground-based measurements. Using the global CTM as the source of LBCs has the benefit of allowing dynamical and climatological LBCs to be compared, which would not be possible using satellite retrievals, due to their time resolution. The latter part of the evaluation addresses the following questions: (i) How strongly are concentrations near the surface influenced by the LBCs? (ii) How are the concentrations influenced aloft in the troposphere at 500 hPa? (iii) What are the benefits of using dynamic vs. climatological LBCs?
The Oslo CTM3 is a global 3-D CTM driven by 3-hourly meteorological forecast data from the European Centre for Medium-Range Weather Forecasts (ECMWF) Integrated Forecast System (IFS) model. The forecasts are produced on a daily basis with 12 h of spin-up, starting from an analysis at noon on the previous day, and are then pieced together to give a uniform dataset for a whole year. It has been clear in the history of ECMWF forecasts that there is a spin-up time during which short-lived phenomena like precipitation adjust to inadequate initialization from the analysis fields (Kraabøl et al., 2002; Wild et al., 2003). Forecasts also allow retrieval of additional meteorological data, such as convective mass fluxes, which are not available from operational analyses. The 12 h spin-up used here was based on ECMWF experience. The use of pieced forecasts is also imperative when running an externally forced CTM because much of the physics (e.g., 3-D clouds, convective fluxes, and precipitation fields) must be supplied as quantities integrated over time by the forecast model, not as snapshots. If analysis fields were used as the core meteorology in the CTM, then the CTM would have to include the full general circulation model physics (unlike here). Winds, temperature, pressure and humidity are given as instantaneous values every 3 h, while other variables such as rainfall and convective fluxes are averages over the 3 h interval. The cycle used here is 36r1, starting from ERA-Interim re-analyses.
Lin and McElroy (2010). Wet deposition for water-soluble aerosols and for gases follows Liu et al. (2001) and Amos et al. (2012). Aerosol scavenging by ice crystals and by cold/mixed precipitation is also represented in the model (Wang et al., 2011). Dry deposition follows a scheme that calculates bulk surface resistances in series (Wesely, 1989). Photolysis rates are calculated with the Fast-JX code (Bian and
effects on a sub-grid scale are represented via a fuel tracer, in order to follow the amount of the emitted species in the plume, and via an effective reaction rate for ozone production and for nitric acid production and destruction during the plume's dilution into the background (Cariolle et al., 2009; Paoli et al., 2011). The parameterization requires a proper estimation of the characteristic plume lifetime, during which the nonlinear interactions between species are important and are simulated via specific rates of conversion. The approach ensures the mass conservation of species in the model. This is the only method which considers a plume evolution related to the local NOx emissions, allowing the transport of the nonlinear
Aerosols and clouds can enhance or reduce photolysis of relevant gas-phase chemical species in the atmosphere by reflecting, scattering, or absorbing solar radiation. Modifications of photolysis rates via this interaction lead to changes in the production rate of sulfuric acid, which lead directly to changes in the new-particle formation rates. Previous versions of PMCAMx-UF employed a parameterization originally used by the Regional Acid Deposition Model (RADM; Chang et al., 1987) to treat the modification of photolysis rates due to cloud presence. This approach required the cloud optical depth from the meteorological input data and the solar zenith angle in order to calculate the time- and layer-dependent adjustment factors for the photolysis rates. This method, however, did not use aerosol concentrations predicted online by the transport model. Instead, a reference aerosol profile was used for every time step and column of grid cells.
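A rough sketch of such a cloud adjustment is given below. The coefficients follow a commonly published form of the RADM-style scheme (a Madronich-type cloud transmission, with different factors below and above the cloud); they are stated here as assumptions for illustration and may differ from the exact values used in PMCAMx-UF:

```python
import math

def cloud_photolysis_factor(j_clear, tau, cos_zen, f_cloud, below_cloud):
    """Adjust a clear-sky photolysis rate for cloud cover (illustrative sketch).

    tau         -- cloud optical depth (from meteorological input)
    cos_zen     -- cosine of the solar zenith angle
    f_cloud     -- cloud fraction in the grid column (0..1)
    below_cloud -- True for layers below the cloud, False for layers above
    """
    # Energy transmission coefficient of the cloud layer (assumed form;
    # 0.42 = 3 * (1 - g) with asymmetry factor g = 0.86).
    tr = (5.0 - math.exp(-tau)) / (4.0 + 0.42 * tau)
    if below_cloud:
        # Attenuation below the cloud
        factor = 1.0 + f_cloud * (1.6 * tr * cos_zen - 1.0)
    else:
        # Enhancement above the cloud from upward scattering off the cloud top
        factor = 1.0 + f_cloud * ((1.0 - tr) * cos_zen)
    return j_clear * factor

# Thick cloud, overhead sun: photolysis is reduced below and enhanced above.
j0 = 1.0e-5
below = cloud_photolysis_factor(j0, tau=20.0, cos_zen=1.0, f_cloud=1.0, below_cloud=True)
above = cloud_photolysis_factor(j0, tau=20.0, cos_zen=1.0, f_cloud=1.0, below_cloud=False)
```

The key design point criticized in the text is visible here: the factor depends only on cloud optical depth and zenith angle, with no dependence on the model's own aerosol fields.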
Photolysis rates are calculated online at each chemical time step based on the two-stream method of Hough (1988), which considers both direct and scattered radiation. The scheme has a total of 203 wavelength intervals from 120 to 850 nm, though only wavelengths above 175 nm are used for stratosphere–troposphere studies. These wavelength intervals are the same as those employed in the TOMCAT stratospheric chemistry scheme (Chipperfield et al., 2015; Sukhodolov et al., 2016). The top-of-the-atmosphere solar flux spectrum is fixed in time, and there is no account of, for example, the 11-year solar cycle in the standard model. This photolysis scheme is coupled with the TOMCAT model by using the model temperature and ozone concentration profiles. The scheme is also supplied with surface albedo, aerosol concentrations and monthly mean climatological cloud fields. This scheme was first used in this manner by Arnold et al. (2005). Previously, an offline approach was used in which photolysis rates were calculated offline and then read in to the model (e.g. Law et al., 1998). Where possible, photochemical data are taken from Sander et al. (2011) for species which are also relevant for the stratosphere. Otherwise, photochemical data are generally taken from IUPAC (Atkinson et al., 2004b, 2006b). The UV absorption cross sections for methyl hydroperoxide (MeOOH), which are from the Jet Propulsion Laboratory (JPL) (Sander
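Schematically, a wavelength-binned photolysis rate is the sum over bins of actinic flux times absorption cross section times quantum yield. The snippet below sketches this with entirely invented bin data (the real scheme uses tabulated cross sections and computed fluxes on its 203 intervals):

```python
import numpy as np

# J = sum over wavelength bins of  F(lambda) * sigma(lambda) * phi(lambda) * dlambda
# All bin values here are dummies for illustration, not TOMCAT data.
wavelength_nm = np.linspace(175.0, 850.0, 204)        # 204 edges -> 203 intervals
d_lambda = np.diff(wavelength_nm)                     # bin widths (nm)
bin_centre = 0.5 * (wavelength_nm[:-1] + wavelength_nm[1:])

flux = 1.0e14 * np.ones(203)                          # actinic flux, photons cm-2 s-1 nm-1
sigma = 1.0e-19 * np.exp(-bin_centre / 300.0)         # absorption cross section, cm2
phi = np.where(bin_centre < 400.0, 1.0, 0.0)          # quantum yield (step function)

j_value = np.sum(flux * sigma * phi * d_lambda)       # photolysis rate, s-1
```

Coupling to the model enters through the flux term: temperature, ozone, albedo, aerosol and cloud fields all modify the actinic flux profile before this sum is taken.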
processes which lead to a flux of the species into the atmosphere. However, direct measurements of emission rates over relatively small regions are difficult to make and are subject to significant errors when extrapolated to global scales (Jung et al., 2011). Process modelling, meanwhile, requires considerable understanding of the complex processes which lead to the emissions, and may also be subject to extrapolation errors similar to those which affect the direct flux measurements. Top-down methods, in contrast, attempt to estimate emissions using information about the atmospheric distributions of the species and knowledge of atmospheric transport. This method has the benefit that the assimilation of observational data provides a constraint on the surface flux, assuming that the representation of atmospheric transport and chemistry is accurate. However, limitations of the top-down approach include insufficient global observational coverage and modelling inaccuracies (Dentener et al., 2003; Mikaloff Fletcher et al., 2004; Chen and Prinn, 2005). If measurements of the isotopic composition of a species are included in a top-down emission estimate, it may be possible to partition the distinct emission processes of the species. However, since we currently have relatively poor global coverage of these isotopic observations (Dlugokencky et al., 2011), bottom-up and top-down approaches must be used in tandem in order to gain a full understanding of trace gas emission budgets. Since an atmospheric model, such as a CTM, is generally used to characterise the atmospheric transport and chemistry in order to relate the concentration fields to the surface flux, top-down techniques are usually referred to as "inverse modelling". This is in contrast to forward modelling, which relates surface flux estimates to atmospheric concentration fields.
The increasing availability of satellite measurements of atmospheric constituents provides a powerful data set for use in data assimilation techniques such as inverse modelling. This, together with ongoing developments in available computational power, means that inverse techniques are increasingly achievable.
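The inverse-modelling idea described above can be sketched as a linear Bayesian estimate: with observations y, a transport operator H (in practice supplied by the CTM), a bottom-up prior x_a and error covariances S_a and S_o, the posterior flux estimate is x̂ = x_a + S_a Hᵀ (H S_a Hᵀ + S_o)⁻¹ (y − H x_a). All numbers below are invented for illustration, not from any real inventory:

```python
import numpy as np

rng = np.random.default_rng(1)

n_flux, n_obs = 3, 50
x_true = np.array([10.0, 5.0, 2.0])           # true regional fluxes
H = rng.uniform(0.0, 1.0, (n_obs, n_flux))    # transport operator (CTM sensitivities)
y = H @ x_true + rng.normal(0.0, 0.1, n_obs)  # observed concentrations

x_a = np.array([8.0, 6.0, 3.0])               # bottom-up prior estimate
S_a = np.diag([4.0, 4.0, 4.0])                # prior (bottom-up) error covariance
S_o = 0.01 * np.eye(n_obs)                    # observation error covariance

# Optimal (maximum a posteriori) flux estimate
G = S_a @ H.T @ np.linalg.inv(H @ S_a @ H.T + S_o)   # gain matrix
x_hat = x_a + G @ (y - H @ x_a)
# x_hat pulls the prior toward the fluxes implied by the observations.
```

The caveats in the text map directly onto this sketch: sparse observational coverage makes H @ S_a @ H.T poorly conditioned, and transport errors bias H itself.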
Global Change (FRSGC) version of the University of California, Irvine (UCI) global chemical transport model (CTM) described by Wild and Prather. The model is driven by 3-hour meteorological fields generated with the European Centre for Medium-Range Weather Forecasts (ECMWF) Integrated Forecast System (IFS) at a spectral resolution of T159 with 40 eta-levels in the vertical. These fields have been used at T21 (5.6° × 5.6°) and T63 (1.9° × 1.9°) resolution with 37 vertical levels in previous studies [Wild et al., 2003, 2004; Hsu et al., 2004] and are also used here at T42 (2.8° × 2.8°) and T106 (1.1° × 1.1°) resolution. The different resolutions are run from the same T159 fields so that the dynamics are continuous from T21 to T106. The notable strengths of these pieced-forecast fields over other analysis products available include their dynamical self-consistency, the use of integrated or averaged quantities, the range of diagnostics included, the longer spin-up to reduce analysis noise, and the 3-hour temporal resolution. The performance of the model in simulating the
from 4° × 5° to 2° × 2.5° and another factor of 2 to a single nested simulation at 0.5° × 0.667°. The linearity from 4° × 5° to 2° × 2.5° implies that grid boxes are sufficiently large that CPU time is proportional to the number of grid boxes, and that transport integration time steps constrained by the Courant–Friedrichs–Lewy criterion (Courant et al., 1967) are largely unaffected by changes to grid box size at these resolutions. Comparison of individual CPU times for chemical and transport operators shows that performing a single cycle of all chemical operations takes ∼ 4 times as long as a single cycle of transport operations at the global scale. This factor is reduced for nested simulations, due in part to the additional CPU time for simulating boundary conditions. Figure 2 illustrates the sensitivity of the simulations to chemical and transport operators at 2° × 2.5° horizontal resolution. The left column shows the species concentrations for the "true" simulation (C10T05). The middle column shows the difference in species concentrations from doubling the transport operator duration. This doubling is in practice a change in time truncation of the transport operator from T × C × T × T × C × T to T × C × T × C, since the transport operator must keep pace with the chemistry operator. Increasing the transport operator duration tends to increase concentra-
model for all regions except the tropical boundary layer, where the CTM uniformly underestimates the observed CO by about 12 ppb. In CTM sensitivity tests with a range of CO-like tracers (not shown here), we find that much of the observed variance (e.g., as measured by M − Q), including fine-scale features, is driven by large and synoptic-scale systems acting on the global-scale latitudinal gradients in CO, rather than by the nearby east Asian emissions. Thus we take this agreement to mean that the large-scale CO gradients and meteorological systems are well simulated. Above the 75th percentile, however, the simulations are uniformly much smaller than observed. One cause might be the failure of the CTM to resolve urban plumes, for example, the intense, small-scale pollution events such as the Shanghai plume [Russo et al., 2003; Simpson et al., 2003; Talbot et al., 2003]. However, for the distributions shown in the figure (CO < 300 ppb), the observed probability distributions are unaffected by spatial filtering at the CTM resolution, and hence these probability distributions should be resolved by the model. Thus the uniform underprediction of the CO probabilities at the upper end of the distribution is likely due to an underestimate of CO emissions from east Asian sources [Palmer et al., 2003], or possibly to chemical influences, rather than a lack of model
Table 1. Number of Data Points (N) for CO and O3 From DC-8
Figure 4 also shows the FTCM performed using the amorphous cellulose (referred to as AC) specific probe (named CC17). Mechanical pulps had the strongest AC-binding signal, also in accordance with the explanation of their higher content of high specific surface areas such as fibrils. Although three of the four pulps exposed much less AC than CC, the opposite was observed for SK pulp, where twice as much AC was detected compared to CC. Clearly, the distribution of AC did not parallel the CC distribution on the surface of untreated fibers. The total cellulose (CC and AC) detected at the surface was the lowest for SK pulp, where the fibrillations are almost nonexistent, as was observed in Fig. 1. This leads to a decrease in high-surface-area fibrils or fiber fragments, which are primary targets for CBMs binding to fiber polymers. Despite containing more cellulose than CTM pulps, the Kraft pulps returned a weaker binding signal for both CC
For extracting meaningful topics from texts, their structures should be considered properly. In this paper, we aim to analyze structured time-series documents, such as a collection of news articles or a series of scientific papers, wherein topics evolve along time depending on multiple topics in the past and are also related to each other at each time. To this end, we propose a dynamic and static topic model, which simultaneously considers the dynamic structures of the temporal topic evolution and the static structures of the topic hierarchy at each time. We show the results of experiments on collections of scientific papers, in which the proposed method outperformed conventional models. Moreover, we show an example of extracted topic structures, which we found helpful for analyzing research activities.
http://retro.enes.org) for 26 species and is complemented by other species provided by the "Emission Database for Global Atmospheric Research – version 4.2" (EDGAR-4.2, http://edgar.jrc.ec.europa, Olivier et al., 1996, 1999). This version includes most direct greenhouse gases, 4 ozone precursor species and 3 acidifying gases, primary aerosol particles and stratospheric ozone-depleting species. It also includes urban/industrial information specifically for the South American continent based on local inventories (Alonso et al., 2010). Currently, all urban/industrial emissions are released in the lowest model layer. However, if emissions from point sources (e.g., stacks) are available, information on stack heights can be easily included. For biomass burning, the PREP-CHEM-SRC includes emissions provided by the Global Fire Emissions Database (GFEDv2) based on Giglio et al. (2006) and van der Werf et al. (2006), or emissions can also be estimated directly from satellite remote sensing fire detections using the Brazilian Biomass Burning Emission Model (3BEM, Longo et al., 2010) as included in the tool. In both cases, fire emissions are available for 107 different species. The biomass burning emission estimate is divided into two contributions, namely smoldering, which releases material in the lowest model layer, and flaming, which makes use of an embedded online 1-D cloud model in each column of the 3-D transport model to determine the vertical injection layer. In this case, the cloud model is integrated using current environmental conditions (temperature, water vapor mixing ratio and horizontal wind) provided by the host model (Freitas et al., 2006, 2007, 2010).
Biogenic emissions are also considered via the Global Emissions Inventory Activity of the Atmospheric Composition Change: the European Network (GEIA/ACCENT, http://www.aero.jussieu.fr/projet/ACCENT/description.php) for 12 species and derived by the Model of Emissions of Gases and Aerosols from Nature (MEGAN, Guenther et al., 2006) for 15 species. Other emissions include volcanic ashes (Mastin et al., 2009), volcanic degassing (Diehl, 2009; Diehl et al., 2012), and biofuel use and agricultural waste burning inventories developed by Yevich and Logan (2003).
Changes away from near-source areas are also evaluated in terms of BC concentrations by a comparison with the observed vertical distribution from the HIPPO3 campaign, in which remote marine air over the Pacific was sampled across all latitudes (Sect. 2.5). To limit the number of model runs, we focus on only one phase of the HIPPO campaign here, but a more comprehensive evaluation of the Oslo CTM3 vertical BC distribution against aircraft measurements was performed by Lund et al. (2018). Figure 6 shows observed average vertical BC concentration profiles against model results from each sensitivity test. Oslo CTM3 reproduces the vertical distribution well in low latitudes and midlatitudes over the Pacific in its baseline configuration, although near-surface concentrations in the tropics are underestimated. This is a significant improvement over Oslo CTM2, for which high-altitude concentrations in these regions were typically overestimated. The baseline configuration of Oslo CTM3 includes updates to the scavenging assumptions based on previous studies investigating reasons for the high-altitude discrepancies (e.g., Hodnebrog et al., 2014; Lund et al., 2017). At high northern and southern latitudes, the model underestimates the observed vertical profiles in the baseline. Increasing the model resolution does not have any notable impact on the vertical profiles. There is a notable increase in high-latitude concentrations when large-scale ice cloud scavenging is decreased. However, there is a simultaneous degradation of model performance in the other latitude bands, pointing to potential
Abstract. Thermochemical phenomena involved in cement kilns are still not well understood because of their complexity, in addition to the technical difficulties in achieving direct measurements of critical process variables. This article addresses the problem of their comprehensive numerical prediction. The presented numerical model exploits Computational Fluid Dynamics and Finite Difference Method approaches for solving the gas domain and the rotating wall, respectively. The description of the thermochemical conversion and movement of the powder particles is addressed with a Lagrangian approach. Coupling between the gas, the particles and the rotating wall includes momentum, heat and mass transfer. Three-dimensional numerical predictions for a full-size cement kiln are presented, and they show agreement with experimental data and benchmark literature. The quality and detail of the results are believed to provide new insight into the functioning of a cement kiln. Attention is paid to the computational burden of the model, and a methodology is presented for reducing the time-to-solution, paving the way for its exploitation in quasi-real-time, indirect monitoring.
our predicted concentration. During computation we allowed the inclusion of data points external to the squares out to a buffer distance of 1200 m to minimize any discontinuity at the boundaries when the squares were combined. This buffer distance means that all estimation nodes within each subset square have a complete set of input data points within the search radius, even when the node is on the boundary of the subset square. We set the maximum search radius for each estimation node at 1240 m so that the experimental covariance range was met, and to ensure that each node would include some data points (e.g., soft data for SatLUR were at the equally spaced 2500 m grid in addition to each of the Mesh Block centroids). We assumed that the mean was constant over the estimation neighborhood, after assessing the frequency distribution of our detrended residuals from the global offset. The BMElib model returned estimates that represent the mean value which minimizes the estimation error variance of the posterior PDF. This value is on the scale of the residuals, and so when added to the global offset the result provides the estimated long-term NO2 concentrations.
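The subset-square, buffer and search-radius logic described above can be sketched as follows (coordinates and point density are invented for illustration; BMElib itself is a MATLAB library and differs in detail):

```python
import numpy as np

rng = np.random.default_rng(2)

square = (0.0, 5000.0, 0.0, 5000.0)    # subset square: xmin, xmax, ymin, ymax (m)
buffer_m = 1200.0                      # buffer beyond the square edges
search_radius_m = 1240.0               # max search radius per estimation node

data_xy = rng.uniform(-2000.0, 7000.0, (500, 2))   # all available data points

# Keep data inside the square expanded by the buffer, so edge nodes still
# see neighbors beyond the square boundary.
xmin, xmax, ymin, ymax = square
in_buffered = ((data_xy[:, 0] >= xmin - buffer_m) & (data_xy[:, 0] <= xmax + buffer_m) &
               (data_xy[:, 1] >= ymin - buffer_m) & (data_xy[:, 1] <= ymax + buffer_m))
candidates = data_xy[in_buffered]

def neighbourhood(node_xy):
    """Data points within the search radius of an estimation node."""
    d = np.hypot(candidates[:, 0] - node_xy[0], candidates[:, 1] - node_xy[1])
    return candidates[d <= search_radius_m]

# A node on the square boundary still collects points from beyond the edge,
# because the 1200 m buffer nearly covers the 1240 m search radius.
edge_node = np.array([0.0, 2500.0])
nbrs = neighbourhood(edge_node)
```

This is why the buffered subsets minimize discontinuities: adjacent squares estimate edge nodes from (almost) the same neighborhoods.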
Hazardous contaminants which mainly remain buried deep in the earth's crust are exposed to the atmosphere due to open cuts, waste deposits from the ore and the created tailing dams. These hazardous compounds travel through the drainage path and contaminate the groundwater as well as surface water in the vicinity of the mine site. Once these hazardous contaminants and their components interact with water (in the form of rainfall, or when collected in the tailings dam as a result of overland flow), they generate Acid Rock Drainage (ARD) and Acid Mine Drainage (AMD). Although several remediation measures have been undertaken for this area, accurate characterization of the contaminant sources of ARD and AMD, and of the physical and chemical behaviors of the contaminants in the groundwater system all along the transport pathway, is required to achieve effective and efficient remediation strategies and optimal groundwater management.