The performance of the NCAR Weather Research and Forecasting Model (WRF) as a West African regional-atmospheric model is evaluated. The study tests the sensitivity of WRF-simulated vorticity maxima associated with African easterly waves to 64 combinations of alternative parameterizations in a series of September simulations. In all, 104 simulations of 12-day duration during 11 consecutive years are examined. The 64 configurations combine WRF parameterizations of cumulus convection, radiation transfer, surface hydrology, and PBL physics. Simulated daily and mean circulation results are validated against NASA's Modern-Era Retrospective Analysis for Research and Applications (MERRA) and NCEP/Department of Energy Global Reanalysis 2. Precipitation is considered in the second part of this two-part paper. A wide range of 700-hPa vorticity validation scores demonstrates the influence of alternative parameterizations. The best WRF performers achieve correlations against reanalysis of 0.40–0.60 and realistic amplitudes of spatiotemporal variability for the 2006 focus year, while a parallel benchmark simulation by the NASA Regional Model-3 (RM3) achieves higher correlations but less realistic spatiotemporal variability. The largest favorable impact on WRF vorticity validation is achieved by selecting the Grell–Devenyi cumulus convection scheme, resulting in higher correlations against reanalysis than simulations using the Kain–Fritsch convection scheme. Other parameterizations have less obvious impact, although WRF configurations incorporating one particular surface model and PBL scheme consistently performed poorly. A comparison of reanalysis circulation against two NASA radiosonde stations confirms that both reanalyses represent observations well enough to validate the WRF results. Validation statistics for optimized WRF configurations simulating the parallel period during 10 additional years are less favorable than for 2006.
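Scores like the 0.40–0.60 correlations above reduce, grid point by grid point, to a correlation between simulated and reanalysis vorticity time series. A minimal sketch of such a score on synthetic data (illustrative names only, not the study's code):

```python
import numpy as np

def anomaly_correlation(sim, ref):
    """Pearson correlation of two time series after removing their means,
    the kind of score used to validate simulated 700-hPa vorticity
    against a reanalysis (names and inputs here are illustrative)."""
    sim = np.asarray(sim, dtype=float)
    ref = np.asarray(ref, dtype=float)
    sa, ra = sim - sim.mean(), ref - ref.mean()
    return float(np.sum(sa * ra) / np.sqrt(np.sum(sa**2) * np.sum(ra**2)))

# Synthetic example: a "simulation" that tracks the "reanalysis" with noise.
rng = np.random.default_rng(0)
ref = np.sin(np.linspace(0, 6 * np.pi, 96))          # 12 days, 8 steps/day
sim = ref + 0.8 * rng.standard_normal(ref.size)
r = anomaly_correlation(sim, ref)
```

A perfect simulation scores 1; the noisy one lands well below, mirroring how alternative parameterizations spread the validation scores.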


In atmospheric flows at the relatively high resolution of T63, a further simplification occurs, namely, that the level coupling is very weak, and so the problem is diagonal in PV level coordinates as well. This can be anticipated from the phenomenology of QG turbulence for wavenumbers much greater than the deformation scale, which is the case in our simulations since the deformation scale is around wavenumber 12. For such wavenumbers, QG turbulence phenomenology predicts that the turbulent transfers are controlled by barotropic-like triads, and hence the vertical coupling vanishes. In other words, at these scales, the flow behaves as if it consists of two uncoupled layers. The off-diagonal elements of the matrices then vanish. The diagonal elements of these matrices all display prominent cusps near the truncation scale. The net dissipation matrix, M_n, is practically zero until n ∼ 50, after which it rises sharply to the truncation scale at n = 63. The drain dissipation matrix, M_d, and the stochastic backscatter matrix, F_b, are more scale-selective, not being significant until n ∼ 57, after which they rise more sharply to effectively greater values than the net dissipation matrix elements. The degree to which the elements of M_d and F_b rise depends on whether we have sufficiently integrated over the decorrelation time. Unfortunately, errors in estimating M_d, whether due to sampling or to specification of a scale-independent decorrelation time, can lead to slightly negative eigenvalues for F_b, at around the point where the cusp starts to appear. We therefore specify a deterministic parameterization, using M_n, for n < n_c. In practice, this may not be necessary, as the net dissipation below n ≈ 50 is negligibly small in the atmospheric case. In the atmosphere, there is not much difference in the results of the LES using either the purely deterministic formulation or the stochastic formulation near the truncation scale.
Both give excellent agreement between the LES and the higher-resolution DNS. It was found to be sufficient to employ the m-averaged (isotropized) matrices, because the flow is to a large extent isotropic near the truncation scale.
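The slightly negative eigenvalues that sampling error can introduce into F_b can be handled generically by projecting the estimated matrix onto the positive-semidefinite cone; a sketch of that remedy (not necessarily the authors' procedure) on a toy matrix:

```python
import numpy as np

def nearest_psd(F):
    """Project a symmetric matrix onto the positive-semidefinite cone by
    zeroing its negative eigenvalues; a generic remedy for the slightly
    negative eigenvalues that sampling error can give an estimated
    backscatter matrix F_b (illustrative, not the authors' procedure)."""
    F = 0.5 * (F + F.T)                    # symmetrize first
    w, V = np.linalg.eigh(F)
    w_clipped = np.clip(w, 0.0, None)      # drop the small negative modes
    return V @ np.diag(w_clipped) @ V.T

# A symmetric matrix with one slightly negative eigenvalue.
F = np.array([[1.0, 0.0], [0.0, -1e-3]])
F_psd = nearest_psd(F)
```

A positive-semidefinite F_b is required for the stochastic forcing to have a well-defined amplitude, which is why the deterministic M_n formulation is the fallback below n_c.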


skewness of total water. These will be full prognostic quantities in that they are advected with the mean flow and have parameterized sources and sinks from the physical parameterizations.
CRM Simulation and Analysis Method
A 29-day simulation of the July 1997 IOP at the ARM SGP site in Oklahoma was performed with the University of California–Colorado State University two-dimensional (2D) CRM (Krueger 1988; Xu and Randall 1995). The simulation's characteristics include 2 km horizontal resolution, a 512 km domain, 34 vertical layers below 20 km on a stretched grid, a five-category bulk microphysics scheme, a third-order turbulence closure, and interactive radiation. The CRM is driven with the variational analysis developed by Minghua Zhang.

In this study we used a parcel model with detailed cloud microphysics to show that the threshold freezing approximation, used in many microphysics schemes, is unable to suitably represent homogeneous freezing in liquid clouds. The evolution of cloud is sensitive to the finite rate of homogeneous nucleation well above −40°C, i.e., at temperatures where we traditionally assume only heterogeneous nucleation can produce ice in clouds. In some simulations, homogeneous freezing was active as warm as −30°C, which is considerably warmer (>8°C) than generally assumed to occur in clouds. A series of parameterizations based on CNT and constrained by the scatter in laboratory measurements was used to show that simulated clouds are sensitive to the chosen parameterization and therefore to the uncertainty in laboratory measurements.
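The contrast between threshold freezing and a finite nucleation rate can be sketched with a steep, CNT-like rate law; all constants below are illustrative placeholders, not fitted CNT parameters:

```python
import math

def frozen_fraction(T_c, t_s, volume_m3, J0=1e12, T0=-35.0, dT=1.5):
    """Frozen fraction of droplets of a given volume after t_s seconds under
    a hypothetical CNT-like homogeneous nucleation rate J(T) [m^-3 s^-1]
    that rises steeply as temperature falls below T0.  All constants are
    illustrative placeholders, not fitted CNT parameters."""
    J = J0 * math.exp((T0 - T_c) / dT)        # steep increase with cooling
    return 1.0 - math.exp(-J * volume_m3 * t_s)

def frozen_fraction_threshold(T_c, T_thresh=-40.0):
    """Threshold approximation: everything freezes instantly at T_thresh."""
    return 1.0 if T_c <= T_thresh else 0.0

V = 4.0 / 3.0 * math.pi * (20e-6) ** 3        # 20-micron-radius droplet
warm = frozen_fraction(-30.0, 600.0, V)       # finite rate: nonzero at -30 C
step = frozen_fraction_threshold(-30.0)       # threshold: exactly zero
```

The finite-rate form yields a nonzero frozen fraction at any temperature given enough droplets and time, whereas the threshold form produces no ice at all above T_thresh, which is the discrepancy the abstract describes.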

For locations where calibration is not possible because of a lack of measured data, we perform a multiple regression using on-site variables, i.e. mean annual air temperature, relative humidity, precipitation, and altitude. The regressions are verified through a leave-one-out cross validation, which also gathers information about the possible errors of estimation. Most of the SMs, when executed with parameters derived from the multiple regressions, give enhanced performance compared to the corresponding literature formulation. A sensitivity analysis is carried out for each SM to understand how small variations of a given parameter influence SM performance. Regarding the L↓ simulations, the Brunt (1932)
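The leave-one-out procedure described above can be sketched as follows, with synthetic "sites" standing in for the measured data (variable names are illustrative):

```python
import numpy as np

def loocv_errors(X, y):
    """Leave-one-out cross validation for ordinary least squares: refit
    the regression with each site withheld and record the prediction
    error at the held-out site (generic sketch, synthetic predictors)."""
    X = np.column_stack([np.ones(len(y)), X])   # add intercept column
    errors = []
    for i in range(len(y)):
        keep = np.arange(len(y)) != i
        beta, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
        errors.append(y[i] - X[i] @ beta)       # held-out prediction error
    return np.array(errors)

# Synthetic "sites": L-down driven by temperature and humidity plus noise.
rng = np.random.default_rng(1)
T, RH = rng.normal(10, 5, 30), rng.uniform(40, 90, 30)
y = 300.0 + 2.0 * T + 0.5 * RH + rng.normal(0, 1, 30)
rmse = float(np.sqrt(np.mean(loocv_errors(np.column_stack([T, RH]), y) ** 2)))
```

The held-out errors are exactly the "information about the possible errors of estimation" the abstract refers to: they estimate out-of-sample error without needing an independent validation set.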


in literature formulations, and (ii) to determine by automatic calibration the site-specific parameter sets for L↓ in SMs.


The ensemble of simulations showed large variation among results of the various members, suggesting disparate performance in predicting LWC. From the qualitative validation, the Morrison microphysics and YSU PBL schemes gave the best results for LWC, with FAR 4%, POH 96%, FOM 33%, and POD 67%. For IWC, the best combination was the Goddard microphysics and YSU PBL schemes, with POD 67% and FAR 43%. In the quantitative validation, the Morrison microphysics scheme yielded superior results to the remaining parameterizations. PBL parameterizations were less important in that validation but were essential in ascertaining the presence or absence of LWC. The Goddard microphysics scheme best detected IWC but tended to underestimate LWC. Although these results varied, the prediction models tended to overpredict the presence of IWC to the detriment of LWC, as evidenced by high FARs for IWC, whereas LWC was underestimated. Consequently, the numerical prediction models may underestimate the presence of liquid water in mixed-phase clouds, thereby understating the risk to aviation safety.
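The POD and FAR scores quoted above come from a standard 2×2 contingency table of forecast versus observed occurrence; a minimal sketch with illustrative counts:

```python
def verification_scores(hits, false_alarms, misses):
    """Standard contingency-table scores of the kind quoted above:
    POD = hits / (hits + misses)          (probability of detection)
    FAR = false alarms / (hits + false alarms)  (false alarm ratio).
    The counts passed in below are illustrative, not the study's data."""
    pod = hits / (hits + misses)
    far = false_alarms / (hits + false_alarms)
    return pod, far

# Counts chosen to roughly reproduce the LWC scores reported (POD 67%).
pod, far = verification_scores(hits=67, false_alarms=3, misses=33)
```

High FAR for IWC alongside low POD for LWC is exactly the signature of the IWC-over/LWC-under trade-off the abstract describes.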

The long-term goal of these two closely related research projects is to formulate robust turbulence parameterizations that are applicable for a wide range of oceanic flow conditions.
OBJECTIVES
The primary objectives of these projects are (i) to bridge the gap between parameterizations/models for small-scale turbulent mixing, developed from fundamental direct numerical simulations (DNS) and grid-turbulence experiments, and geophysical-scale models, with an emphasis on making progress towards improved turbulence parameterizations in the ocean, and (ii) to develop a quantitative understanding of the impact of obstacles on the lateral mixing of momentum and scalars in oceanic flows.

During the year 2006–07, we completed the development and testing of the CBLAST wind-wave coupling parameterization for the next-generation high-resolution fully coupled atmosphere-wave-ocean model for tropical cyclone research and prediction. We have also conducted a number of coupled model simulations to develop and test a new sea-spray parameterization using wave energy dissipation, done in collaboration with Drs. Fairall and Bao. Here we summarize the major accomplishments of the PI team:

Correspondence to: A. Romanou (ar2235@columbia.edu)
Received: 20 June 2013 – Published in Biogeosciences Discuss.: 5 July 2013
Revised: 17 December 2013 – Accepted: 17 December 2013 – Published: 26 February 2014
Abstract. Sensitivities of the oceanic biological pump within the GISS (Goddard Institute for Space Studies) climate modeling system are explored here. Results are presented from twin control simulations of the air–sea CO2 gas exchange using two different ocean models coupled to the same atmosphere. The two ocean models (Russell ocean model and Hybrid Coordinate Ocean Model, HYCOM) use different vertical coordinate systems, and therefore different representations of column physics. Both variants of the GISS climate model are coupled to the same ocean biogeochemistry module (the NASA Ocean Biogeochemistry Model, NOBM), which computes prognostic distributions for biotic and abiotic fields that influence the air–sea flux of CO2 and the deep-ocean carbon transport and storage. In particular, the model differences due to remineralization rate changes are compared to differences attributed to physical processes modeled differently in the two ocean models, such as ventilation, mixing, eddy stirring and vertical advection. GISSEH (GISSER) is found to underestimate mixed layer depth compared to observations by about 55 % (10 %) in the Southern Ocean and overestimate it by about 17 % (underestimate by 2 %) in the northern high latitudes. Everywhere else in the global ocean, the two models underestimate the surface mixing by about 12–34 %, which prevents deep nutrients from reaching the surface and promoting primary production there. Consequently, carbon export is reduced because of reduced production at the surface. Furthermore, carbon export is particularly sensitive to remineralization rate changes in the frontal regions of the subtropical gyres and at the Equator, and this sensitivity in the model is much higher than the sensitivity to physical processes such as vertical mixing and vertical advection


continue to increase. Also, discrepancies among model simulations appear to grow larger after day three as well. This phenomenon was mentioned previously in section 4.2.1 and most likely has to do with the lack of data assimilation in this version of WRF/Chem.
One of the variables with the largest differences between the three simulations is surface relative humidity. All three simulations appear to capture the diurnal variation of RH, with increasing values throughout the night and decreasing values throughout the day. R_Y shows a strong negative bias, especially on days three through five, but even on days one and two at some sites. In this particular study, values for RH given by R_Y are less than half of the observed values at times. Comparatively, the S_Y simulation performs very well when simulating RH values. This was not an expected development, as S_Y is a simple thermal diffusion scheme, with less physically rigorous dynamics for soil moisture. One would assume that models using a more dynamically complex scheme would be better able to accurately simulate atmospheric moisture content. This was not the case for this particular episode, as all simulations except S_Y demonstrated a net negative bias for RH.
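The net biases discussed here are simple means of simulated-minus-observed differences; a sketch on synthetic RH series (illustrative values, not the study's data):

```python
import numpy as np

def mean_bias(sim, obs):
    """Mean bias (simulated minus observed); a negative value indicates the
    dry bias described above.  Series here are synthetic and illustrative."""
    return float(np.mean(np.asarray(sim) - np.asarray(obs)))

obs_rh = np.array([80.0, 85.0, 70.0, 55.0, 60.0])          # observed RH (%)
r_y = obs_rh * 0.45                       # "less than half the observed"
s_y = obs_rh + np.array([1.0, -2.0, 0.5, 1.5, -1.0])       # small errors
bias_ry = mean_bias(r_y, obs_rh)          # strongly negative (dry bias)
bias_sy = mean_bias(s_y, obs_rh)          # near zero
```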


This work is licensed under the Creative Commons Attribution International License (CC BY).
http://creativecommons.org/licenses/by/4.0/
Abstract
We investigated the performance of 12 different physics configurations of the climate version of the Weather Research and Forecasting (WRF) Model over the Middle East and North Africa (MENA) domain. Possible combinations among two Planetary Boundary Layer (PBL), three Cumulus (CUM) and two Microphysics (MIC) schemes were tested. The 2-year simulations (December 1988–November 1990) have been compared with gridded observational data and station measurements for several variables, including total precipitation and maximum and minimum 2-meter air temperature. An objective ranking method for the 12 different simulations and the selection procedure for the best-performing configuration for the MENA domain are based on several statistical metrics and carried out for relevant sub-domains and individual stations. The setup for cloud microphysics is found to have the strongest impact on temperature biases, while precipitation is most sensitive to the cumulus parameterization scheme, mainly in the tropics.
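One common form of objective ranking is to rank each configuration on every error metric and order by mean rank; a generic sketch with hypothetical configurations and scores (the paper's metric set and weighting may differ):

```python
def rank_configurations(scores):
    """Objective ranking of model configurations: rank every configuration
    on each error metric (lower is better), then order by mean rank.
    A generic sketch; the paper's metrics and weighting may differ."""
    names = list(scores)
    n_metrics = len(next(iter(scores.values())))
    mean_rank = {}
    for name in names:
        ranks = []
        for m in range(n_metrics):
            better = sum(scores[other][m] < scores[name][m] for other in names)
            ranks.append(better + 1)            # competition ranking
        mean_rank[name] = sum(ranks) / len(ranks)
    return sorted(names, key=lambda n: mean_rank[n])

# Hypothetical (precip RMSE, |Tmax bias|, |Tmin bias|) per configuration.
scores = {"PBL1-CU1-MIC1": (2.1, 0.8, 1.2),
          "PBL1-CU2-MIC2": (1.7, 1.1, 0.9),
          "PBL2-CU3-MIC1": (2.5, 0.5, 1.5)}
best = rank_configurations(scores)[0]
```

Averaging ranks rather than raw scores keeps metrics with different units (mm of precipitation, kelvins of bias) on an equal footing.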


shallow cumulus regions, which is mostly due to the change in the anvil cloud fraction.


Rogers–Ramanujan functions
Elliptic functions
In his Lost Notebook, Ramanujan gave product expansions for a pair of weight-two Eisenstein series of level five. We show that Ramanujan's formulas are special cases of more general parameterizations for quintic Eisenstein series. In particular, we prove that the Eisenstein series for the Hecke subgroup of level five are expressible as homogeneous polynomials in two parameters closely connected with the Rogers–Ramanujan functions. Moreover, the coefficients of each polynomial are symmetric in absolute value about the middle terms. Corresponding polynomial expansions for allied series, including Eisenstein series on the full modular group, are also derived.


Although our viewpoint in this paper is to consider ensemble Kalman inversion as an optimization method, and evaluate it from this perspective, there is considerable insight to be gained from the perspective of Bayesian inversion; this is despite the fact that the algorithm does not, in general, recover or sample the true Bayesian posterior distribution of the inverse problem. Algorithms that can, with controllable error, approximately sample from the true posterior distribution are commonly referred to as fully Bayesian, with examples including Markov chain Monte Carlo and sequential Monte Carlo. Ensemble Kalman inversion is not fully Bayesian, but the link to Bayesian inversion remains important, as we now explain. There is a considerable literature on methods to improve fully Bayesian approaches to the inverse problem through, for example, geometric and hierarchical parameterizations of the unknown. The purpose of this paper is to demonstrate how these ideas from Bayesian inversion may be used with some success to improve the capability of ensemble Kalman methods considered as optimizers. In view of the relatively low computational cost of the ensemble methods, in comparison with fully Bayesian inversion, this cross-fertilization of ideas has the potential to be quite fruitful.
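A single ensemble Kalman inversion update, in its textbook perturbed-observation form for y = G(u) + noise, can be sketched on a linear toy problem (details vary across EKI variants; this is not the paper's exact scheme):

```python
import numpy as np

def eki_step(U, G, y, gamma, rng):
    """One ensemble Kalman inversion update, viewed as an optimization
    step: shift each particle using empirical cross-covariances between
    parameters and model outputs (textbook perturbed-observation form)."""
    Gu = np.array([G(u) for u in U])               # forward map per particle
    U_c = U - U.mean(axis=0)                       # parameter anomalies
    G_c = Gu - Gu.mean(axis=0)                     # output anomalies
    J = len(U)
    Cug = U_c.T @ G_c / J                          # cov(u, G(u))
    Cgg = G_c.T @ G_c / J                          # cov(G(u), G(u))
    K = Cug @ np.linalg.inv(Cgg + gamma * np.eye(len(y)))
    y_pert = y + rng.normal(0.0, np.sqrt(gamma), size=(J, len(y)))
    return U + (y_pert - Gu) @ K.T

# Linear toy problem: recover u_true from y = A u (noise-free data here).
rng = np.random.default_rng(2)
A = np.array([[1.0, 0.5], [0.0, 2.0]])
u_true = np.array([1.0, -1.0])
y = A @ u_true
U = rng.normal(0, 2, size=(50, 2))                 # initial ensemble
for _ in range(20):
    U = eki_step(U, lambda u: A @ u, y, gamma=0.01, rng=rng)
err = float(np.linalg.norm(U.mean(axis=0) - u_true))
```

The ensemble mean drives the data misfit down like an optimizer, but the collapsed ensemble spread does not quantify posterior uncertainty, which is precisely the "not fully Bayesian" point made above.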


can be shown to run in time O(n^w · w) for a poset with n elements and width w (the size of the largest antichain). Interestingly, none of these efforts has so far led to an fpt algorithm.
We believe that this uncertainty about the exact complexity status of counting linear extensions with respect to these various parameterizations is at least partly due to the fact that we deal with a counting problem whose decision version is trivial, i.e., every poset has at least one linear extension. This fact makes it considerably harder to show that the problem is fixed-parameter intractable; in particular, the usual approach based on parsimonious reductions fails. On the other hand, the same predicament makes studying the complexity of counting linear extensions significantly more interesting, as noted also by Flum and Grohe [16]:
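For intuition, #LE can be computed by brute force on tiny posets; the O(n^w · w) algorithm mentioned above does far better when the width w is small (this sketch is the exponential baseline, not that algorithm):

```python
from itertools import permutations

def count_linear_extensions(n, relations):
    """Brute-force #LE: count permutations of {0..n-1} consistent with
    every order relation (a, b) meaning a < b in the poset.  Exponential
    in n, so usable only for tiny posets; illustrative baseline only."""
    count = 0
    for perm in permutations(range(n)):
        pos = {v: i for i, v in enumerate(perm)}
        if all(pos[a] < pos[b] for a, b in relations):
            count += 1
    return count

# A 2-element antichain below a common top element: 0 < 2 and 1 < 2.
le = count_linear_extensions(3, [(0, 2), (1, 2)])
```

The triviality of the decision version is visible here: with no relations the count is n! and is never zero, which is exactly why parsimonious reductions have nothing to map "no" instances to.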


We have given the first parameterized intractability result for counting linear extensions. We hope that the employed techniques will inspire similar results and expand our knowledge about the parameterized complexity of counting problems. In particular, even for #LE there remain many open questions concerning other very natural parameterizations, such as the width of the poset or the treewidth of the poset graph. Moreover, our intractability result for the treewidth of the cover graph poses the question of whether there are stronger parameterizations under which #LE becomes tractable, e.g., the treewidth of the poset graph, the treedepth or even the vertex cover number of the poset or cover graph, as well as combinations of these parameters with parameters such as the width, the dimension, or the height of the poset. These numerous examples illustrate that the parameterized complexity of #LE is still largely unexplored. As a side note, it would also be interesting to establish whether our hardness result for #LE can be sharpened to #W[1]-hardness and to obtain matching membership results.


In particular, we study the complexity of #LE parameterized by the well-known decompositional parameter treewidth for two natural graphical representations of the input poset, i.e., the c


To overcome this limitation, we propose the system level approach to controller synthesis, which is composed of three elements: System Level Parameterizations (SLPs), System Level Constraints (SLCs) and System Level Synthesis (SLS) problems. This paper is organized as follows: in §II, we define the system model considered in this paper, review relevant results from the distributed optimal control and QI literature, provide a motivating example for a system level approach, and present a survey of our main results. We then define and analyze SLPs in §III, which provide a novel parameterization of all internally stabilizing controllers and the closed-loop system responses that they achieve. In §IV, we provide a catalog of useful SLCs that can then be imposed on these system responses and the controllers that achieve them – we further show that by using SLPs and SLCs, we can parameterize a set of constrained stabilizing controllers that is a strict superset of those that can be parameterized using quadratic invariance, hence generalizing the QI framework. Finally, we use SLPs and SLCs to formulate the SLS problem, and show that it defines the broadest known class of constrained optimal control problems that can be solved using convex programming.
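In the discrete-time state-feedback case, the SLP of §III can be summarized as an affine constraint on the closed-loop maps from disturbance to state and input (stated here in the standard SLS form; the paper's general setting also covers output feedback and additional structure):

```latex
% System: x_{t+1} = A x_t + B u_t + w_t, controller u = K x.
% The system responses \Phi_x : w \mapsto x and \Phi_u : w \mapsto u
% achievable by some internally stabilizing controller are exactly the
% solutions of the affine constraint
\begin{bmatrix} zI - A & -B \end{bmatrix}
\begin{bmatrix} \Phi_x \\ \Phi_u \end{bmatrix} = I,
\qquad \Phi_x,\ \Phi_u \in \tfrac{1}{z}\mathcal{RH}_\infty ,
% with the controller recovered as K = \Phi_u \Phi_x^{-1}.
```

Because the constraint is affine in (Φ_x, Φ_u), any convex SLC imposed directly on these responses yields a convex synthesis problem, which is what makes the SLS formulation in the final section tractable.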
