Calibration of Microsimulation Models for Multimodal Freight Networks

This research presents a framework for incorporating the unique operating characteristics of multimodal freight networks into the calibration process for microscopic traffic simulation models. Because of the nature of heavy freight movements in US DOT Region VII (Nebraska, Iowa, Missouri, Kansas), the project focuses on heavy goods vehicles (HGV), or trucks. In particular, a genetic algorithm (GA) based optimization technique was developed and used to find optimal parameter values for the vehicle performance model used by "Verkehr In Städten SIMulationsmodell" (VISSIM), a widely used microscopic traffic simulation package. At present, the Highway Capacity Manual (HCM), the most common reference for analyzing the operational characteristics of highways, only provides guidelines for highway segments where the heavy vehicle percentage is 25 percent or less. However, significant portions of many highways, such as Interstate 80 (I-80) in Nebraska, carry heavy vehicle percentages greater than 25 percent. Therefore, with the anticipated growth in freight-moving truck traffic, there is a real need for traffic microsimulation models that can effectively replicate conditions with significant heavy vehicle traffic. The procedure developed in this research was successfully applied to the calibration of traffic operations on a section of I-80 in California. For this case study, the calibrated model produced more realistic results than the uncalibrated model (default values) and reaffirmed the importance of calibrating microscopic traffic simulation models to local conditions.
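The GA loop described above can be sketched in a few lines. Everything here is illustrative: `simulated_speed` is a hypothetical stand-in for an actual VISSIM run, and the two performance parameters and the observed speed are invented.

```python
import random

def simulated_speed(power_factor, weight_factor):
    # Hypothetical stand-in for a VISSIM run: returns the mean truck speed
    # (mph) produced by two vehicle-performance parameters.
    return 75.0 * power_factor / (1.0 + weight_factor)

OBSERVED_SPEED = 62.0  # invented field-measured mean truck speed

def fitness(params):
    # Smaller gap between simulated and observed speed = higher fitness.
    return -abs(simulated_speed(*params) - OBSERVED_SPEED)

def genetic_search(pop_size=30, generations=40, seed=1):
    rng = random.Random(seed)
    pop = [(rng.uniform(0.5, 1.5), rng.uniform(0.0, 1.0))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]            # selection: keep best half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]  # crossover: average
            if rng.random() < 0.2:                       # occasional mutation
                child[rng.randrange(2)] += rng.gauss(0.0, 0.05)
            children.append(tuple(child))
        pop = parents + children
    return max(pop, key=fitness)

best = genetic_search()
print(best)
```

In a real calibration the fitness evaluation would launch a full simulation run, which is why the paper's GA keeps population sizes and generation counts modest.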

Traffic fundamentals for A22 Brenner freeway by microsimulation models.

conditions were guaranteed. To test the validity of the traffic microsimulation model, some model parameters were changed and adjusted until the model outputs were similar to the empirical data. It is noteworthy that the calibration of a microsimulation model is an iterative process that can be stopped only when the model matches locally observed conditions (Barcelo, 2011). In previous research, a statistical approach including hypothesis testing with t-tests and confidence intervals was used to measure the closeness between empirical data and simulation outputs for a test freeway segment under uncongested traffic conditions (Mauro et al., 2014). The ln V-D² regressions for simulated and empirical data were compared. The microsimulation model was thus able to reproduce the real phenomenon of traffic flow over a wide range of operations, from free-flow speed conditions up to nearly the critical density. In this study, however, further considerations have been developed.
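The hypothesis-testing step can be illustrated with Welch's t statistic for two independent samples; the speed data below are invented, not the A22 measurements.

```python
import math
import random

def welch_t(sample_a, sample_b):
    # Welch's t statistic for two independent samples with unequal variances.
    na, nb = len(sample_a), len(sample_b)
    mean_a = sum(sample_a) / na
    mean_b = sum(sample_b) / nb
    var_a = sum((x - mean_a) ** 2 for x in sample_a) / (na - 1)
    var_b = sum((x - mean_b) ** 2 for x in sample_b) / (nb - 1)
    return (mean_a - mean_b) / math.sqrt(var_a / na + var_b / nb)

# Invented speed samples standing in for field data and simulator output.
rng = random.Random(0)
empirical = [rng.gauss(100.0, 5.0) for _ in range(60)]
simulated = [rng.gauss(100.0, 5.0) for _ in range(60)]

t = welch_t(empirical, simulated)
print(round(t, 3))  # compare |t| against the chosen critical value, e.g. ~2
```

If |t| stays below the critical value, the calibration cannot be rejected at the chosen significance level, which is the stopping criterion described above.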

Calibration and Validation of Strategic Freight Transportation Planning Models with Limited Information

The validation step in regional, national or international multimodal freight models is, however, considerably less documented, while it has a very important impact on the level of confidence one can have in a model (de Jong et al. [1]). An interesting discussion of the reasons why calibration and validation of such models is difficult can be found in Zhang [2]. She points out the large number of (elements in each) variable(s) or the lack of availability of reference data. She cites two papers in which the authors have put an effort into validating their model. The first, Jourquin [3], performs a validation by comparing the modelled modal shares of road, rail, and inland waterway transport to the observed ones, per category of commodities. While this indicates whether each modality in the network bears the right amount of freight flows, it does not guarantee that the flows are assigned to the right routes. The second paper, by Yamada et al. [4], also presents a model with three modes (road, rail, and sea) and two types of users (freight and passenger). The modal split estimated for this model was validated by comparing the modelled link flows with the actual link traffic counts, but the node flows were not calibrated or validated. As a result, when the model is used for node flow estimation, it is difficult to assess the validity or reliability of the results. Therefore, Zhang proposes her own freight model for road, rail and inland waterways in the Netherlands, calibrated and validated at the mode, route and node levels.

Accelerating the calibration of stochastic volatility models

When implementing a calibration algorithm for an option pricing model with a known characteristic function of the asset's return, one has to choose a method for pricing vanilla options. In this paper we compare the following methods: (1) direct integration, (2) the fast Fourier transform (FFT), (3) the fractional FFT. Before choosing one of these techniques, it is important to consider all possible ways of improving the accuracy and calculation speed of each method. These improvements can include mathematical modifications as well as implementation techniques. In this paper we compare optimized implementations of the calibration algorithm based on each of the above valuation methods. This helps to identify the factors that matter most for the accuracy and speed of calibration.
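The valuation methods themselves are too involved for this excerpt, but the accuracy/speed trade-off at their heart can be illustrated by checking a radix-2 Cooley-Tukey FFT against a naive O(N²) transform (a generic sketch, not the paper's implementation):

```python
import cmath

def dft(x):
    # Naive O(N^2) discrete Fourier transform (the "direct" analogue).
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n)
                for k in range(n))
            for j in range(n)]

def fft(x):
    # Radix-2 Cooley-Tukey FFT, O(N log N); len(x) must be a power of two.
    n = len(x)
    if n == 1:
        return x[:]
    even = fft(x[0::2])
    odd = fft(x[1::2])
    tw = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
    return ([even[k] + tw[k] for k in range(n // 2)] +
            [even[k] - tw[k] for k in range(n // 2)])

signal = [complex(k % 3, 0) for k in range(8)]
a, b = dft(signal), fft(signal)
print(all(abs(u - v) < 1e-9 for u, v in zip(a, b)))
```

Both transforms give the same values; the FFT's lower complexity is what makes repeated pricing inside a calibration loop affordable.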

Calibration of Multicurrency LIBOR Market Models

For calibration purposes, the multi-currency LMM is defined on a maturity time grid T_j = jδ, for j = 0, ..., n_f, where δ is the greatest common divisor of the market LIBOR forward times over all currencies. For instance, USD LIBORs refer to a period of 3 months, whereas EUR and JPY LIBORs refer to a period of 6 months, which gives a δ of 3 months. The model forward LIBORs thus all have an accrual period of length δ, even though some market forward LIBORs have accrual periods of integer multiples of δ. The n_f forward LIBORs in the discrete-tenor LIBOR Market Model for each currency are assumed to have deterministic volatility λ(t, T_j). The stochastic dynamics are assumed to be driven by a d_I-dimensional standard Brownian motion W, i.e. the λ(t, T_j) are d_I-dimensional vector-valued functions of t, assumed to be piecewise constant on the calendar time grid t = (t_0 = 0, t_1, ..., t_{n_c}). Thus we have for each currency a volatility matrix Λ = {λ_{i,j,k}} ∈ R
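The tenor-grid construction can be sketched directly; the 24-month horizon is an arbitrary illustrative choice:

```python
from functools import reduce
from math import gcd

# Market LIBOR accrual periods per currency, in months (as in the text).
tenors_months = {"USD": 3, "EUR": 6, "JPY": 6}

# delta is the greatest common divisor of the tenors over all currencies.
delta = reduce(gcd, tenors_months.values())

# Maturity grid T_j = j * delta, here out to a hypothetical 24-month horizon.
horizon = 24
n_f = horizon // delta
grid = [j * delta for j in range(n_f + 1)]
print(delta, grid)
```

Market forwards with 6-month accrual then simply span two consecutive model periods of length δ = 3 months.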

Optimal calibration for exponential Levy models

Keywords: European option, jump diffusion, minimax rates, severely ill-posed nonlinear inverse problem, spectral cut-off.


Uncertainty quantification and calibration of physical models

very few works have discussed uncertainty quantification. Uncertainty analysis is the study of how the distribution of the outputs depends on the inputs and parameters. The quantification of uncertainty provides the confidence level in the estimated outputs and indicates how robust the conclusions drawn from the model results are. It also lets us assess the efficiency of various models based on their corresponding uncertainty levels and decide how to weight different models. The further study of how much uncertainty is induced by learning about specific inputs is called a sensitivity analysis. It tells us the sources of the uncertainties and what is most important to know. [Zhuang et al., 2009] estimated NPP and its associated uncertainties using geospatial statistical approaches.
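A minimal Monte Carlo sketch of uncertainty propagation and one-at-a-time sensitivity analysis, using an invented toy model rather than any physical model from the text:

```python
import random
import statistics

def model(a, b):
    # Invented toy physical model: nonlinear in a, linear in b.
    return a ** 2 + 3.0 * b

rng = random.Random(42)

# Propagate input uncertainty (a ~ N(1, 0.1), b ~ N(2, 0.3)) to the output.
outputs = [model(rng.gauss(1.0, 0.1), rng.gauss(2.0, 0.3))
           for _ in range(5000)]
print(round(statistics.fmean(outputs), 2),
      round(statistics.stdev(outputs), 2))

# One-at-a-time sensitivity: freeze b at its mean so only a's uncertainty
# propagates; the reduced spread shows b dominates the output uncertainty.
only_a = [model(rng.gauss(1.0, 0.1), 2.0) for _ in range(5000)]
print(round(statistics.stdev(only_a), 2))
```

The output spread gives the confidence in the estimate; comparing spreads with individual inputs frozen identifies which input is most important to learn.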

Calibration of Laboratory Models in Population Genetics

Given the above, the calibration strategy weakens Diamond's claim about the greatest weakness of laboratory experiments. That is, calibration provides a practical articulation of what Griesemer and Wade (1988, p. 77) claim is the character of the inference from causes observed in the laboratory to causes in nature. Their idea is that the scientist finds matches between the causes in the laboratory and causes in nature. My view is that calibration of laboratory populations to natural populations is a key experimental strategy for demonstrating those matches. According to Griesemer and Wade (1988), analogical reasoning drives the matching between laboratory model and natural system. I have no dispute with Griesemer and Wade about the analogical character of the matching inference. My aim has been to specify the practical strategy used for making that inference and to show how that strategy warrants the claim being made. The ensuing discussion of claim (b) above, that calibration increases the justificatory strength of empirical claims based on laboratory experiments via its connection with specific elements of Lloyd's (1987, 1988) confirmation view, fleshes out my claim about

Robust Analysis of Freight Comprehensive Transportation Networks

Freight comprehensive transportation networks (FCTNs) are complex networks spanning railways, highways, waterways and other means of transportation. Since many uncertainties arise from the demand and supply of goods, the network structure, and external causes such as earthquakes, floods, and fires [1, 2], the equilibrium of the freight flows might be severely affected. Evaluating the ability of FCTNs to maintain their function or performance under these uncertainties (i.e., their robustness) has therefore become increasingly important.

Combining microsimulation and CGE models: Effects on equality of VAT reforms

Concerning item 9), we use the percentage change in the net return on real capital for limited liability companies as an approximation to the percentage change in dividends. As was the case for items 5) to 8), we employ the purchaser price index of new investments exclusive of VAT and the investment tax. In addition, we assume that taxes other than the VAT and the investment tax paid by the limited liability company constitute a constant share of the net return on real capital. We may then think of the percentage change in the expression above as representing the percentage change in the after-tax net return on real capital in limited liability companies. As stated earlier, the microsimulation model only applies to personal taxes.

Topological Connectivity Analysis on Freight Transportation Networks

Abstract. The connections between nodes in Freight Transportation Networks (FTNs) are highly complex. Based on topological models of real-world FTNs, this paper introduces quantitative indicators, such as the degree distribution and edge/node betweenness centrality, to analyze topological connectivity. Exponential laws for the degree distribution and edge/node betweenness centrality are shown by numerical results on the coal transportation network in Shanxi, China.
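Node betweenness centrality for an unweighted network can be computed with Brandes' algorithm; the five-node chain below is a made-up miniature, not the Shanxi network:

```python
from collections import deque

def betweenness(adj):
    # Brandes' algorithm for unweighted, undirected node betweenness.
    bc = {v: 0.0 for v in adj}
    for s in adj:
        stack = []
        pred = {v: [] for v in adj}
        sigma = {v: 0 for v in adj}; sigma[s] = 1
        dist = {v: -1 for v in adj}; dist[s] = 0
        q = deque([s])
        while q:                      # BFS, counting shortest paths
            v = q.popleft()
            stack.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    pred[w].append(v)
        delta = {v: 0.0 for v in adj}
        while stack:                  # back-propagate dependencies
            w = stack.pop()
            for v in pred[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    # each undirected pair was counted from both endpoints
    return {v: b / 2 for v, b in bc.items()}

# Invented chain of freight nodes: A - B - H - C - D.
adj = {"A": ["B"], "B": ["A", "H"], "H": ["B", "C"],
       "C": ["H", "D"], "D": ["C"]}
deg = {v: len(ns) for v, ns in adj.items()}
print(deg)
print(betweenness(adj))
```

The middle node H carries every shortest path between the two halves, so its betweenness is the largest even though its degree equals its neighbours'.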

Artificial neural networks in freight rate forecasting

The objective of this paper is to forecast BPI movements using historical time-series data and ANNs. ANNs have a layered structure consisting of one input layer, one output layer and one or more hidden layers situated between them (Li and Parsons, 1997). Each layer consists of neurons, the main building blocks of ANNs. Neurons are interconnected by weighted signal paths, with each neuron calculating its output through its activation (transfer) function, which can be linear or non-linear (usually sigmoidal). A neuron's input is a transformed linear combination of the outputs of the neurons in the layer below it (Montgomery et al., 2015). Various recent studies have demonstrated the classification and predictive power of ANNs (e.g. Oancea and Ciucu, 2013). ANNs are a data-driven and self-adaptive method that can learn from examples and capture subtle functional relationships, especially when those relationships are unspecified. Moreover, ANNs are suitable in cases where a large amount of relevant data exists but the solution to the problem is difficult to specify (Zhang et al., 1998). ANNs can generalise, and thus draw correct inferences about the unseen part of the data, even when the sample data contains noisy information. ANNs are thus universal function approximators and can approximate any continuous function to the desired level of accuracy. A neural network is a flexible technique, containing many parameters, with the advantage of fitting historical data well. Conventional forecasting models, by contrast, are restricted in determining the underlying function because of the complexity of the real system involved. In this regard, the universal function approximation capability of ANNs is a valuable alternative for addressing such restrictions (Zhang et al., 1998).
To analyse the forecasting performance of FFAs, two different models of dynamic NNs (Hagen et al., 2014) are employed and compared in Section 3.2. In this paper, the Neural Network Toolbox (MATLAB R2017a) is used in MATLAB's numerical computing environment and programming language to facilitate the calculation of future freight rates. Output and input in ANN models refer to the outcomes obtained from the models and the data used for training the models, respectively.
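As a minimal sketch of the data-driven idea (not the MATLAB toolbox models used in the paper), a single linear neuron can be trained by stochastic gradient descent on lagged values of an invented series:

```python
import math
import random

# Invented stand-in for a freight-rate series (BPI-like cycle plus noise).
rng = random.Random(7)
series = [10.0 * math.sin(0.3 * t) + rng.gauss(0.0, 0.5)
          for t in range(200)]

# Lagged inputs -> next value. One linear neuron; the cited studies use
# multilayer dynamic networks, but the train/predict shape is the same.
LAGS = 3
data = [(series[t - LAGS:t], series[t]) for t in range(LAGS, len(series))]
train, test = data[:150], data[150:]

w, b, lr = [0.0] * LAGS, 0.0, 1e-3
for _ in range(200):
    for x, y in train:
        pred = sum(wi * xi for wi, xi in zip(w, x)) + b
        err = pred - y                              # gradient of squared error
        w = [wi - lr * err * xi for wi, xi in zip(w, x)]
        b -= lr * err

mae = sum(abs(sum(wi * xi for wi, xi in zip(w, x)) + b - y)
          for x, y in test) / len(test)
print(round(mae, 2))  # held-out mean absolute error
```

Even this tiny model learns the cycle from lags alone, which is the self-adaptive, example-driven property the paragraph attributes to ANNs.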

Gravity law in the Chinese highway freight transportation networks

The gravity law has been documented in many socioeconomic networks: the flow between two nodes correlates positively with the strengths of the nodes and negatively with the distance between them. However, such research on highway freight transportation networks (HFTNs) is rare. We construct directed and undirected highway freight transportation networks between 338 Chinese cities using about 15.06 million truck transportation records over five months and test the traditional and modified gravity laws using GDP, population, and per capita GDP as the node strength. The gravity law is found to hold over about two orders of magnitude for the whole sample, as well as for the daily samples, except for the days around the Spring Festival, during which the daily sample sizes are much smaller. Accordingly, the daily exponents of the gravity law are stable except during the Spring Festival period. The results also show that the gravity law has higher explanatory power for the undirected HFTNs than for the directed HFTNs. However, the traditional and modified gravity laws have comparable explanatory power.
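A modified gravity law of the form F = k (m_i m_j)^α / d^β can be fitted by ordinary least squares in log space; the city pairs below are synthetic (not the truck records above), so the known exponents should be recovered:

```python
import math
import random

rng = random.Random(3)

# Synthetic city pairs: masses (e.g. GDP) and distances, with flows drawn
# from F = K * (m_i * m_j)^ALPHA / d^BETA times multiplicative noise.
ALPHA, BETA, K = 0.8, 1.5, 2.0
rows = []
for _ in range(300):
    mi, mj = rng.uniform(1, 100), rng.uniform(1, 100)
    d = rng.uniform(10, 1000)
    flow = K * (mi * mj) ** ALPHA / d ** BETA * math.exp(rng.gauss(0, 0.1))
    rows.append((math.log(mi * mj), math.log(d), math.log(flow)))

def ols(rows):
    # Least squares for ln F = c + a*u + g*v via 3x3 normal equations.
    sums = [[0.0] * 3 for _ in range(3)]
    rhs = [0.0] * 3
    for u, v, y in rows:
        x = (1.0, u, v)
        for i in range(3):
            for j in range(3):
                sums[i][j] += x[i] * x[j]
            rhs[i] += x[i] * y
    for col in range(3):            # Gauss-Jordan elimination
        piv = max(range(col, 3), key=lambda r: abs(sums[r][col]))
        sums[col], sums[piv] = sums[piv], sums[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(3):
            if r != col:
                f = sums[r][col] / sums[col][col]
                for c in range(3):
                    sums[r][c] -= f * sums[col][c]
                rhs[r] -= f * rhs[col]
    return [rhs[i] / sums[i][i] for i in range(3)]

c, a, g = ols(rows)
print(round(a, 2), round(-g, 2))  # recovered alpha and beta
```

The coefficient on ln d comes out negative, and its magnitude is the distance-decay exponent tested in the study.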

EPIDEMIOLOGY OF OVERWEIGHT AND OBESITY OF TRAITORS OF THE MULTIMODAL FREIGHT MANAGEMENT OFFICE OF THE CITY PROVINCE OF KINSHASA

In developed countries, it concerns affluent populations and is therefore a sign of success and wealth. Obesity, a multifactorial disease, is now considered a pandemic characterized by a metabolic disorder resulting from an accumulation of excess fat in the body, whose consequences can be harmful to health. It is a progressive chronic disease. It constitutes a serious risk factor that compromises the psychosocial functioning and the quality of life of the patients who suffer from it (7). The prevalence of overweight and obesity among Kinshasa workers is not well known. It is to fill this gap that the present study was undertaken in a multimodal transport company in the Democratic Republic of Congo, namely the Office of Management of Multimodal Freight (OGEFREM).

Multimodal and Multi view Models for Emotion Recognition

attention models H-MM-2 and H-MM-3 by about 1% of UA. This means that the attention mechanisms are more complementary than overlapping. Attention visualization. For the modality-based attention, the vector z from Eq. 1 determines how much acoustic information will go through the next layers, whereas (1 - z) is the amount of lexical data allowed. Figure 3 provides a visualization of these vectors. The bars show the amount of information that is captured from one modality versus the other. For instance, the sample "oh my gosh" relies more on acoustic than lexical information. Intuitively, this phrase by itself could describe different emotions, but it is the acoustic modality that resolves the ambiguity. Regarding the context-based attention, Figure 3 shows the places where the model focuses along the utterance. For large-context utterances, where the acoustic features are more or less similar, the semantics can help to highlight specific spots. For example, in the second sentence on the right of Figure 3, the model detects the semantics of the words sense and stupid and associates them with the words should, go, and army. The attention mechanism not only emphasizes semantics but also takes the acoustic features into account. In the same block of sentences, it is worth noting that the words primarily driven by acoustics (e.g., sweatheart, oh god, sorry and yeah) are highlighted by the attention mechanism. These results also align with the intuition that the attention mechanisms are complementary.
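A scalar simplification of the modality gate described above can be sketched as follows (the paper's z from Eq. 1 is a vector, and the feature values and weights here are arbitrary placeholders):

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gated_fusion(acoustic, lexical, w, b):
    # z decides how much acoustic information passes; (1 - z) is the
    # lexical share, as in the modality-based attention described above.
    score = sum(wi * xi for wi, xi in zip(w, acoustic + lexical)) + b
    z = sigmoid(score)
    fused = [z * a + (1.0 - z) * l for a, l in zip(acoustic, lexical)]
    return fused, z

rng = random.Random(0)
acoustic = [rng.gauss(0, 1) for _ in range(4)]
lexical = [rng.gauss(0, 1) for _ in range(4)]

fused, z = gated_fusion(acoustic, lexical, [0.1] * 8, 0.0)
print(round(z, 3))  # plotting z per utterance gives bars like Figure 3
```

Because z is learned from both inputs, an ambiguous phrase can push the gate toward the acoustic side, which is the behaviour visualized for "oh my gosh".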

Multi Objective Calibration For Agent Based Models

The final contents of the archive are a population of parameter sets that yield non-dominated, Pareto-optimal solutions to the calibration problem. With no way to weight one criterion against the other, there is no explicit way to choose between them. In sophisticated models, however, there are often other characteristics that have not been included as measurable criteria but that allow the domain expert to choose between solutions. In our example this is not the case, and we simply select a parameter set in the middle of the range where both criteria are well satisfied. Figure 5 shows these parameter values.
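Extracting the non-dominated front from an archive and picking a middle compromise can be sketched as follows, with invented criterion values (both criteria are errors to be minimised):

```python
def non_dominated(points):
    # Keep points not dominated by any other point (minimisation in both).
    return [p for p in points
            if not any(q[0] <= p[0] and q[1] <= p[1] and q != p
                       for q in points)]

# Invented archive: (criterion-1 error, criterion-2 error) per parameter set.
archive = [(1.0, 9.0), (2.0, 5.0), (3.0, 4.0), (5.0, 3.0), (9.0, 1.0),
           (4.0, 6.0), (6.0, 6.0)]

front = sorted(non_dominated(archive))
# With no weighting between criteria, take the middle of the front.
choice = front[len(front) // 2]
print(front)
print(choice)
```

The dominated points (4, 6) and (6, 6) drop out; the middle of the remaining front is the compromise where neither criterion is badly violated.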

Algorithms for Itinerary Planning in Multimodal Transportation Networks

The journey planning problem in an urban public transport network has been formulated as a shortest path problem with time windows on a multimodal time-schedule network that lexicographically optimizes the en-route time, the number of interchanges and the total walking and waiting time (not necessarily in this order). This formulation reflects the traveler's decision-making problem of selecting the point in time to start his/her urban journey within a specified time window and the sequence of transport legs needed to reach the destination in time. Incorporating this flexibility in departing from the origin enlarges the solution space of the problem, providing efficient journey planning decisions. Lexicographical ordering is used to evaluate the alternative itineraries instead of the time-consuming task of determining the efficient frontier of the problem.
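Because Python compares tuples lexicographically, the ordering described above drops straight into Dijkstra's algorithm; the three-criterion edge labels below are invented:

```python
import heapq

def lexicographic_dijkstra(graph, source, target):
    # Costs are (en-route time, interchanges, walking time); Python tuple
    # comparison gives exactly the lexicographic ordering described above,
    # and componentwise addition of non-negative legs preserves it.
    best = {source: (0, 0, 0)}
    heap = [((0, 0, 0), source)]
    while heap:
        cost, node = heapq.heappop(heap)
        if node == target:
            return cost
        if cost > best.get(node, cost):
            continue                      # stale heap entry
        for nxt, leg in graph.get(node, []):
            cand = tuple(c + l for c, l in zip(cost, leg))
            if nxt not in best or cand < best[nxt]:
                best[nxt] = cand
                heapq.heappush(heap, (cand, nxt))
    return None

# Invented mini-network: edge labels are (minutes, interchanges, walk mins).
graph = {
    "A": [("B", (10, 0, 2)), ("C", (8, 1, 0))],
    "B": [("D", (5, 1, 1))],
    "C": [("D", (7, 1, 3))],
}
print(lexicographic_dijkstra(graph, "A", "D"))
```

Both routes A-B-D and A-C-D take 15 minutes, so the tie is broken on the second criterion (one interchange beats two), exactly as the lexicographic evaluation intends.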

Spectral calibration of exponential Lévy Models [2]

The calibration of financial models has become a rather important topic in recent years, mainly because of the need to price increasingly complex options in a consistent way. The choice of the underlying model is crucial for the good performance of any calibration procedure. Recent empirical evidence suggests that more complex models, taking into account such phenomena as jumps in stock prices, smiles in implied volatilities and so on, should be considered. Among the most popular such models are Lévy models, which on the one hand are able to produce complex behavior of stock time series, including jumps and heavy tails, and on the other hand remain tractable with respect to option pricing. The work on calibration methods for financial models based on Lévy processes has mainly focused on certain parametrisations of the underlying Lévy process, with the notable exception of Cont and Tankov (2004). Since the characteristic triplet of a Lévy process is a priori an infinite-dimensional object, the parametric approach is always exposed to the problem of misspecification, in particular when there is no inherent economic foundation for the parameters and they are only used to generate different shapes of possible jump distributions. In this work we propose and test a non-parametric calibration algorithm which is based on the inversion of the explicit pricing formula via Fourier transforms and a regularisation in the spectral domain. Using the fast Fourier transform, the procedure is fast, easy to implement and yields good results in simulations in view of the severe ill-posedness of the underlying inverse problem.

Simultaneous calibration of hydrological models in geographical space

As defined, the water-balance-related parameter η is specific to each catchment and each model parameter vector. Therefore, each individual catchment has a large variation in η across the calibrated 10 000 parameter sets. Also, for the same set of good parameters that match different water balances, different catchments always require very different η values to control actual evapotranspiration. Parameter η is estimated because it controls the water balance and can be estimated at other catchments. The remaining (dynamic) parameters are regionally calibrated, i.e. all catchments are given the same parameter set. Therefore, only η varies between catchments. As η is specific to each parameter vector, regionalizing η directly is not feasible, and η remains different for different parameter vectors after regionalization. In the numerical experiments, in order to estimate the water balance parameter η, the long-term discharge volumes were treated as known variables for both gauged and ungauged catchments. For application in practical systems, the long-term discharge volumes have to be estimated for ungauged catchments. This problem is not explicitly treated in this paper. The estimation of parameter η is a limitation of the presented simultaneous calibration approach, and regionalization of long-term discharge volumes is a prerequisite for application in ungauged basins. For the study area, the discharge coefficients, which relate discharge volumes to (known) precipitation, show quite smooth spatial behavior, as shown in Fig. 14. Thus, the regionalization of this parameter does not seem to be an extremely complicated task in this particular region. Following the previous analysis of η, for each common dynamical parameter set one can obtain an estimator of η for a given catchment based on the regionalization of the discharge coefficients.
The potential application of this approach in other regions needs to be investigated in future work.

Fitting Nonlinear Calibration Curves: No Models Perfect

It is well known that when the wrong equation is fitted to data, the shape and pattern of the residual plot contain valuable information that can be used to determine how the equation should be modified to achieve a better description of the data [13]-[20]. Residuals thus provide a convenient means of checking whether the calibration data is actually linear [21]-[24]. The residuals are the vertical distances in the y-direction between the points and the regression line (which minimizes the sum of their squares) [21]. No rigorous mathematical treatment is required. If there is a true linear relationship between the variables with symmetrically distributed error, the residuals will be scattered randomly above and below zero, with an equal number of positive and negative values. Systematic deviations may indicate either a systematic error in the experiment or an incorrect or inadequate model. A curvilinear pattern in the residual plot means that a non-linear curve, containing higher-order terms, will fit better. A linear trend (descending or ascending) may indicate that an additional term in the model is needed. A "fan-shaped" residual pattern shows that the experimental error increases with the mean response (heteroscedasticity), so the constant-variance assumption is inappropriate [21]. This last case should be addressed by the weighted least squares method or by transforming the response.
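The curvilinear residual pattern described above can be reproduced by fitting a straight line to deliberately quadratic calibration data (an invented example):

```python
# Fit y = a + b*x by ordinary least squares, then inspect residual signs.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

xs = [float(x) for x in range(11)]
ys = [0.5 * x * x for x in xs]       # truly curvilinear calibration data

a, b = fit_line(xs, ys)
residuals = [y - (a + b * x) for x, y in zip(xs, ys)]
signs = ["+" if r > 0 else "-" for r in residuals]
print("".join(signs))  # ends above the line, middle below: a curved pattern
```

Instead of scattering randomly, the signs form a run of pluses, a run of minuses, then pluses again, which is exactly the systematic pattern that flags a missing higher-order term.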
