The Midwest region of the United States is an important “hub” of the nation’s transportation systems. According to the 2002 Commodity Flow Survey by the Bureau of Transportation Statistics (BTS), more than 968 billion ton-miles, or about 31% of total U.S. commodity movements, originate in, pass through, or arrive in the Midwest region (BTS 2005). The greater Memphis metropolitan area is of particular significance. With regard to freight, the Federal Express Corporation (FedEx) worldwide headquarters and world hub are located in Memphis. The third-largest U.S. cargo facility of the United Parcel Service, Inc. (UPS), and the only UPS facility capable of processing both air and ground cargo, is located in Memphis (Hanson 2007); the Memphis International Airport has been the world’s busiest airport in terms of cargo traffic volume. On the passenger side, the City of Memphis and its surrounding metropolitan area form one of the two major population centers in the Midwest U.S. The greater Memphis metropolitan area, however, is also one of the regions most vulnerable to seismic hazards in the U.S. Its aging infrastructure and many unreinforced buildings would sustain significant damage, and more than one million people would be severely impacted. A catastrophic New Madrid Seismic Zone (NMSZ) earthquake could not only disrupt the functioning of the Memphis metropolitan area but also have ripple effects throughout the nation’s economy and society.
Earthquake engineering applications such as vibration modes, time-history analysis, and structural damping can be explained to civil engineering students by performing “hands-on” experiments. Energy dissipation systems are among the latest technologies for earthquake-resistant design and retrofitting. Seismic isolation is an innovative earthquake protection system installed between the structure and its foundation. The isolated structure responds essentially elastically under seismic motion; hence the seismic isolation device tends to reduce structural damage. Small-scale dynamic tests on structural models are quite practical for simulating dynamic effects on structures, and they make it possible to derive important results related to the dynamic behavior of the seismically isolated structure. There are many techniques for seismic isolation of structures, such as lead rubber bearings, high-damping rubber bearings, and friction pendulum bearings. The idea of the friction pendulum bearing is to force the foundation of the structure to dissipate the input earthquake energy through the friction mechanism taking place between two similarly shaped steel concave surfaces.
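The pendulum analogy can be made concrete: for a friction pendulum bearing, the isolated period follows the standard pendulum relation T = 2π√(R/g), depending only on the radius of curvature of the concave surface and not on the supported mass. A minimal sketch (the 2.24 m radius is an illustrative value, not taken from the text):

```python
import math

def fps_period(radius_m: float, g: float = 9.81) -> float:
    """Natural period of a friction pendulum system (FPS) bearing.

    Like a pendulum, the isolated period depends only on the radius of
    curvature of the concave sliding surface, not on the supported mass.
    """
    return 2.0 * math.pi * math.sqrt(radius_m / g)

# A concave surface with a 2.24 m radius gives an isolation period
# of about 3 s, typical of the long periods targeted by base isolation.
print(round(fps_period(2.24), 2))  # → 3.0
```

This mass independence is one reason friction pendulum bearings are attractive: the isolation period can be set by geometry alone.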
The applications described in Chapters 3 and 4 show that the Bayesian learning method using the ARD prior plays an important role in a feature selection algorithm when features extracted from measurements or observations are utilized as inputs via a sum of basis functions with unknown parameters as coefficients. By using model class selection to find the optimal hyperparameters (variances) in the prior, some of the coefficients become zero (Gaussian with zero mean and zero variance), thereby pruning out terms that prove to be irrelevant for predictions, as determined from the data. Therefore, one can simply let the algorithm sort out the relevant terms by initially including all seemingly relevant terms.
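As a sketch of this pruning behavior, scikit-learn's `ARDRegression` implements the same ARD idea (one precision hyperparameter per coefficient, with irrelevant weights driven toward zero), though not necessarily the exact algorithm of Chapters 3 and 4. The synthetic data below, where only 2 of 10 candidate features actually matter, is purely illustrative:

```python
import numpy as np
from sklearn.linear_model import ARDRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))          # 10 candidate basis-function features
y = 3.0 * X[:, 0] - 2.0 * X[:, 3] + 0.1 * rng.normal(size=200)  # only 2 matter

model = ARDRegression()                 # ARD prior: per-weight variance hyperparameters
model.fit(X, y)

# Weights of irrelevant features are shrunk essentially to zero,
# so thresholding recovers the relevant terms from the data alone.
relevant = np.flatnonzero(np.abs(model.coef_) > 0.1)
print(relevant)                         # expected to recover features 0 and 3
```

The practical point matches the text: all seemingly relevant terms can be included up front, and the evidence-based hyperparameter optimization sorts out which ones survive.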
cannot be separated from the site-specific building properties and the earthquake excitation Q, meaning that the yield curvature of the element cannot be written as a function of element properties only. The yield curvature of the element also depends on the current axial force, which is a function of the site-specific seismic loading (Q) and the specific building properties (X), that is, the dead and live load distribution and the force paths in the load-bearing structure. Therefore, the general (site-independent) form of the limit-state function is incompatible with the exact limit-state function (4.6), implying that Method 2 cannot be used with the exact limit-state function because of the generality requirement for the fragility functions. Some ways to reduce the error caused by the inexactness of the limit-state function are explored in Section 4.2.5. In particular, it is shown that by selecting an appropriate fragility function for the uncoupled damage estimation, the discrepancy between the safety region defined by the exact limit-state function and that defined by the inexact one can be reduced.
The Seismic Alert System of Mexico (SASMEX) comprises the Seismic Alert System of Mexico City (SAS), a pioneer in public earthquake early warning services, and the Seismic Alert System of Oaxaca City (SASO), with 12 and 36 sensing field stations, respectively. The alerting functions of these systems are currently being integrated. SAS, started in 1991, and SASO, started in 2003, have issued 78 early warnings out of more than 2350 earthquakes detected by their sensing field stations. Since 2003, the authorities of the Mexico Federal District (GDF) and the State of Oaxaca, in agreement with Mexico’s Ministry of the Interior (SEGOB), have coordinated efforts to improve their seismic alert systems and combine their roles so as to make mitigation of the disasters that strong Mexican earthquakes might cause as efficient as possible. This paper describes the main current applications of SAS and SASO: schools, commercial radio and TV broadcasts, and the Alternate Emitters of the Seismic Alert System (EASAS) operating in Acapulco and Chilpancingo, both in Guerrero state. To improve the efficiency of seismic warning delivery, since 2009 we have been deploying NWR-SAME receivers in the Valley of Mexico City and testing enhancements of their codes to expedite the issuing of earthquake warnings. The GDF is also promoting an increase in the capacity to observe the seismic danger around Mexico City, in order to detect and warn of any strong effect and thus reduce the possibility of a new seismic disaster. Recently, SEGOB, through its General Coordination of Civil Protection, agreed to extend SASMEX sensor coverage between the states of Chiapas and Jalisco to issue earthquake warning signals and reduce the vulnerability of their populations.
The Caltrans Seismic Design Criteria specifies using uncorrected SPT N-values for site classification [Caltrans 2006]. It is common geotechnical practice to correct field SPT N-values for variations from standard practice (i.e., hammer energy, sampler type, borehole diameter, and rod length). For some applications, it is also common practice to normalize N-values to a reference overburden stress (typically, 1 atmosphere). For the purpose of site classification, it is appropriate to apply correction factors intended to account for deviations from the standard test method, such as hammer energy or non-standard samplers, but not appropriate to normalize N-values by the overburden pressure. In addition to site classification, V_S may be required for site-
Network reliability is defined as the probability that the network retains its connectivity and functionality over a given period of time. Connectivity depends on the completeness of the post-earthquake network and is thus a suitable objective for short-term emergency response and the provision of humanitarian aid. A pioneering study by Augusti et al. (1998) provided a reliability-based method to prioritize maintenance strategies for deteriorating bridges in a simple series-parallel system. Liu and Frangopol (2006) provided a bridge network maintenance method that considered time-dependent structural reliability prediction, highway user cost, and bridge life-cycle cost. However, they assumed that there is no correlation among bridge failures and used the same, unrealistic traffic pattern for all scenarios. Bocchini and Frangopol (2011) assessed network life-cycle performance and used a time-variant reliability model for individual bridges. They performed transportation network analysis for every combination of bridge service states in a small six-node network. System travel delay is one of the most commonly used system performance metrics for transportation networks. It provides information on highway user costs and is suitable for evaluating long-term economic effects. This metric has been widely used to assess seismic impacts on transportation networks.
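The series-parallel connectivity idea can be illustrated with a small Monte Carlo reliability estimate. The three-bridge layout and per-bridge failure probabilities below are hypothetical, not taken from the cited studies:

```python
import random

def system_survives(p_fail, rng):
    """Series-parallel bridge system: bridges 0 and 1 are parallel
    (either one keeps the link open), bridge 2 is in series with them."""
    up = [rng.random() > pf for pf in p_fail]
    return (up[0] or up[1]) and up[2]

def connectivity_reliability(p_fail, n=100_000, seed=1):
    """Monte Carlo estimate of the probability the O-D pair stays connected."""
    rng = random.Random(seed)
    hits = sum(system_survives(p_fail, rng) for _ in range(n))
    return hits / n

p_fail = [0.2, 0.3, 0.1]                  # hypothetical failure probabilities
exact = (1 - 0.2 * 0.3) * (1 - 0.1)       # closed form for this small system
print(round(connectivity_reliability(p_fail), 3), exact)
```

For a system this small the closed-form answer is available, but the Monte Carlo approach is what scales to real networks with correlated bridge failures, which is exactly the complication the cited studies wrestle with.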
for modeling of any scenario. It suffers from difficulties in reconstructing texture-less areas of scenes, such as walls or window glass, and from an inability to generate point clouds of scenes that require high densities. Given the two categories of error sources, systematic error due to camera factors and systematic error due to poor planning of the camera network geometry, it is crucial to take appropriate strategies to minimize each type of error. For systematic error due to camera factors, it is suggested to calibrate the camera with professional calibration tools before using it in the field; to use single-lens reflex (SLR) or better cameras that produce high-quality photos; and to use prime lenses instead of zoom lenses whenever possible. For systematic error due to poor planning of the camera network geometry, a series of rules of thumb exists, such as increasing the overlap of the photo coverage, keeping the baseline as large as possible, and constraining the intersection angles between 60° and 90°. However, the ideal setups based on these rules of thumb are sometimes hard to implement in the field owing to practical constraints. Therefore, an automatic guidance system that is field-deployable would be favorable; this is a natural extension for future study. Moreover, an analytical procedure is desirable that can characterize the quantitative relationships between data accuracy and the affecting factors (e.g., camera type, image resolution) for achieving a desired level of accuracy. Such quantitative relationships would be especially helpful for determining data collection settings for a particular scenario.
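The intersection-angle rule of thumb can be checked numerically: for a symmetric two-camera setup aimed at a point at depth D with baseline B, the ray convergence angle is approximately 2·atan(B/2D). A small sketch (the baseline and depth values are illustrative):

```python
import math

def intersection_angle_deg(baseline_m: float, depth_m: float) -> float:
    """Approximate ray intersection (convergence) angle for two cameras
    separated by `baseline_m`, both aimed at a point `depth_m` away,
    in a symmetric configuration."""
    return math.degrees(2.0 * math.atan(baseline_m / (2.0 * depth_m)))

# To reach the suggested 60-90 degree window at 5 m depth, the baseline
# must be roughly 5.8-10 m; a 2 m baseline falls far short.
for b in (2.0, 5.8, 10.0):
    print(b, round(intersection_angle_deg(b, 5.0), 1))
```

This is exactly the kind of relationship a field-deployable guidance system could evaluate on the fly to tell an operator whether a planned camera position satisfies the geometry rules.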
There are a few challenges and practical considerations associated with modeling transportation network redundancy. Adding redundancy to create more alternatives for travelers could involve not only routes but also travel modes. Multiple travel modes within the system could increase redundancy by providing substitutions that maintain transport service if one or more modes are disturbed by disruptions. For example, in the 1994 Northridge earthquake in California, the transit system helped to alleviate the initial congestion in the Los Angeles highway network. During the Interstate freeway reconstruction, transit usage tripled on rail and bus lines; however, it returned to the pre-earthquake level one year after the disruption (Deblasio et al., 2003). Hence, the redundancy measure should consider the flexibility of travel alternatives as well as the behavioral response of users in the event of a disruption. However, alternative diversity alone may not be a sufficient measure of network redundancy, as it ignores interactions between transport demand and supply: capacity is not explicitly considered in the evaluation of travel alternatives (i.e., mode and route). It is hence necessary to include network capacity in measuring transportation network redundancy. Evaluating the network-wide capacity is not trivial. Multiple origin-destination (O-D) pairs exist, and the demands between different O-D pairs are not exchangeable or substitutable. The network-wide capacity is not just a simple sum of the individual link capacities. Also, mode and route choice behaviors have to be considered in estimating the multi-modal network capacity. Disruption of an auto link may increase the travel time of the auto mode or even change its availability. This may further lead to flow shifts between modes, changing the multi-modal network capacity.
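The point that network-wide capacity is not the sum of link capacities can be illustrated with a max-flow computation on a toy single-O-D network (capacities are hypothetical; real multi-O-D, multi-modal capacity estimation with choice behavior is far more involved):

```python
import networkx as nx

# Toy directed road network; capacities in vehicles/hour (hypothetical).
G = nx.DiGraph()
G.add_edge("O", "a", capacity=1000)
G.add_edge("O", "b", capacity=800)
G.add_edge("a", "D", capacity=600)
G.add_edge("b", "D", capacity=900)
G.add_edge("a", "b", capacity=300)   # cross link between the two corridors

flow_value, _ = nx.maximum_flow(G, "O", "D")

# The O-D capacity is bottleneck-limited to 1500 veh/h, well below the
# 3600 veh/h obtained by naively summing all link capacities.
print(flow_value)
```

Removing or degrading a single link (say, the cross link "a"→"b" after an earthquake) and recomputing the max flow gives a simple sensitivity check, which is the single-mode seed of the capacity-based redundancy measures discussed above.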
applications of different intelligent transportation systems to ensure safe driving. These applications include Adaptive Cruise Control (ACC) (Potts & Okurowski, 1995), Antilock Brake Systems (ABS) (Lin & Hsu, 2003), Collision Warning Systems (CWS) (Bella & Russo, 2011), the modeling of Time-To-Collision (TTC) (Farah et al., 2009; Kiefer et al., 2006), an intelligent data fusion system using vision/GPS sensing (Chang et al., 2009), and on-board safety monitoring systems (Horrey & Lesch, 2011). Moreover, especially for highway predictive accident models, statistical methods have frequently been developed using approaches such as multivariate analysis, empirical Bayes, fuzzy logic, and artificial neural networks (ANNs). These approaches are utilized for various purposes such as establishing relationships between variables, screening covariates, and predicting values. Chueh (1996) developed a multi-linear regression model, which can yield a negative or zero predicted accident count, leading to a false indication of absolute safety. Shankar et al. (1995) developed an accident frequency prediction model incorporating geometric variables, namely horizontal and vertical alignment, and environmental factors such as rainfall, number of rainy days, and snowfall. Greibe (2003) and Caliendo et al. (2007) proposed crash prediction models for urban areas and multilane roads in Italy, using the Poisson regression model, the Negative Binomial (NB) regression model, and the Negative Multinomial regression model.
The prototype testing of this new magnetic sealing mechanism indicated that it can significantly reduce the leakage problem in reciprocating machines, including compressors, and the oil pollution in cryogenic regenerators. It also showed that this sealing mechanism can replace the oil separation system in refrigerating compressors. In the prototype tests, the sealing function of this new mechanism proved better than that of regular rubber seals, diaphragm seals, corrugated pipe seals, and magnetic fluid seals.
, scaled-down earthquake laboratory experiments within a centrifuge, and improved sensor technology permit measurements on an increased number of channels at higher sampling rates. Finally, earthquake simulations produce ever more data because of more elaborate simulation techniques. All these improvements in measurement technology lead to large, high-dimensional data sets. Visualizing these data is very useful for gaining new insights into the problems involved. The visualizations themselves are based on improved or newly developed visualization techniques such as volume modeling and feature detection and visualization. In this issue of OASIcs (OpenAccess Series in Informatics) we present the results of the annual workshop of this IRTG, held at the Bodega Marine Laboratory, Bodega Bay, California, U.S., March 19–21, 2010. The aim of the workshop was to bring together all project partners, PhD students, and advisors to report on the different research projects. After three days of presentations and discussions, the graduate students spent their time writing papers that cover the outcomes of the program and give surveys of related topics.
of microscopic behavior models, including car-following, lane-changing, and other individual driver behavior models, can be used to construct a simulation to investigate multi-lane traffic flow dynamics, as shown by Hodas and Jagota (2003). Many well-developed microscopic traffic simulation tools are available for research and design of transportation systems, such as Aimsun (TSS-Transport Simulation Systems (2010)), MITSimLab (MIT Intelligent Transportation Systems Program (2010)), PARAMICS (Quadstone Paramics Ltd (2011)), VISSIM (PTV Planung Transport Verkehr AG (2011)), and POLARIS (Auld et al. (2015)). Microscopic simulation tools have been used for a wide range of applications in network design, analysis of transportation problems, and the evaluation of ITS and traffic management strategies. One benefit of having a simulation model is that it provides traffic information at both the transportation system level and the individual vehicle level, so it can be used to experiment with traffic management strategies and to evaluate their impact on the entire network and on individuals. Microscopic traffic simulation with appropriate assumptions and calibration can be used to validate traffic congestion models (Kurihara et al. (2009)) and to observe macroscopic phenomena in order to understand the influence of individual drivers' behavior (Goldbach et al. (2000)). Chu et al. (2011) used real traffic data and a microscopic simulation model to reconstruct highway traffic as a platform for validating a stochastic macroscopic traffic flow model. However, although microscopic simulation can be considered an alternative to field experiments and empirical data, it requires considerable effort to tune the simulation parameters so that they represent actual traffic.
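As a minimal example of the car-following component of such microscopic models, the following sketches the well-known Intelligent Driver Model (IDM); the parameter values are common textbook defaults, not tied to any of the tools cited above:

```python
import math

def idm_accel(v, gap, dv, v0=30.0, T=1.5, a=1.0, b=1.5, s0=2.0, delta=4):
    """Intelligent Driver Model acceleration (m/s^2) for a follower with
    speed v (m/s), bumper-to-bumper gap (m), and closing speed
    dv = v - v_leader (m/s). Parameters: desired speed v0, time headway T,
    max acceleration a, comfortable deceleration b, jam gap s0."""
    s_star = s0 + max(0.0, v * T + v * dv / (2.0 * math.sqrt(a * b)))
    return a * (1.0 - (v / v0) ** delta - (s_star / gap) ** 2)

# Free road (huge gap): the driver accelerates toward the desired speed.
print(idm_accel(v=20.0, gap=1e9, dv=0.0) > 0)   # True
# Tailgating a slower leader at a 5 m gap: strong braking.
print(idm_accel(v=20.0, gap=5.0, dv=5.0) < 0)   # True
```

Stepping many such vehicles forward in time, plus a lane-changing rule, is essentially what tools like VISSIM or Aimsun do at scale, and the parameters (v0, T, a, b, s0) are exactly the kind of quantities that must be calibrated against field data.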
One common argument against the use of DLTs is that there are more efficient and better performing technologies for storing and managing large amounts of data, such as the highly scalable distributed database system Cassandra. There are benefits and challenges with both approaches. However, the use of DLTs does more than just provide decentralized execution of transactions and storage of data; it also provides a means for decentralized governance. This means that no single administrator has the credentials to modify the state of the network on its own, whether by attempting to tamper with data or by making changes to the protocol. Membership in a consortium network means that all protocol modifications are reviewed and accepted by the participants, and all changes are logged in the blockchain (e.g., membership, chaincode, create/update/delete). The ability to maintain a log of all transactions in the network, along with the identities of the individuals submitting them, allows for non-repudiation of actions by all participants of the network, a feature often sought in critical information systems.
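The tamper-evidence property that underlies this logging can be sketched with a minimal hash-chained log; this is a toy stand-in for a blockchain ledger, not the format of any particular DLT:

```python
import hashlib
import json

def append_block(chain, record):
    """Append a record whose hash covers the previous block's hash,
    so any retroactive edit breaks the chain from that point on."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    chain.append({"record": record, "prev": prev_hash,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(chain):
    """Recompute every hash; return False on any mismatch."""
    prev_hash = "0" * 64
    for block in chain:
        body = json.dumps({"record": block["record"], "prev": prev_hash},
                          sort_keys=True)
        if block["prev"] != prev_hash or \
           block["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev_hash = block["hash"]
    return True

chain = []
append_block(chain, "add member: org-A")
append_block(chain, "update chaincode to v2")
print(verify(chain))             # True: intact log
chain[0]["record"] = "tampered"  # a lone administrator edits history...
print(verify(chain))             # False: ...and the chain exposes it
```

A real consortium DLT adds signatures (for non-repudiation of who submitted each record) and consensus (so no single node can rewrite even a valid-looking chain), but the hash chaining is the core of why the log is tamper-evident.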
The realization of such control techniques requires appropriate communication architectures. Two-way communications will enable reliable interaction between the grid and drivers. At a minimum, mobile drivers need to access charging station location and pricing information. However, spatio-temporal variations in customer demand may create imbalances between charging stations. Communication networks will allow grid operators to interact with customers to balance the load. This can be done by offering customers incentives to drive extra miles to neighboring stations. For garage charging applications, time-differentiated tariffs can motivate customers to charge during off-peak hours. However, implementation of such a pricing scheme requires EVs to be fully equipped with the required communication modules. Smart grid applications for EVs also allow reverse power transfer (Vehicle-to-Grid (V2G)): groups of stationary EVs can sell part of their stored energy during peak hours to alleviate stress on the grid. Auction-based energy trading can be enabled with the integration of two-way communications.
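A toy comparison of charging cost under a hypothetical two-tier time-of-use tariff illustrates the off-peak incentive; the rates, hours, and energy amounts are invented for illustration:

```python
def charging_cost(kwh_by_hour, peak_rate=0.30, offpeak_rate=0.12,
                  peak_hours=range(16, 21)):
    """Cost of a charging schedule under a hypothetical two-tier
    time-of-use tariff: peak pricing 16:00-21:00, off-peak otherwise.
    `kwh_by_hour` maps hour-of-day to energy drawn in that hour."""
    return sum(kwh * (peak_rate if hour in peak_hours else offpeak_rate)
               for hour, kwh in kwh_by_hour.items())

# Charging 30 kWh during the evening peak vs. shifting it to 02:00-04:00.
peak_plan = {18: 15.0, 19: 15.0}
offpeak_plan = {2: 15.0, 3: 15.0}
print(round(charging_cost(peak_plan), 2),
      round(charging_cost(offpeak_plan), 2))   # → 9.0 3.6
```

Even this trivial schedule shows a 60% saving from shifting the same energy off-peak, which is the behavioral lever that the two-way communication architecture would deliver to drivers in real time.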
Big Data-driven transportation engineering has the potential to improve utilization of road infrastructure, decrease traffic fatalities, reduce fuel consumption, and decrease construction worker injuries, among other benefits. Despite these benefits, research on Big Data-driven transportation engineering is difficult today due to the computational expertise required to get started. This work proposes BoaT, a transportation-specific programming language, and its Big Data infrastructure, aimed at decreasing this barrier to entry. Our evaluation, which uses over two dozen research questions from six categories, shows that research is easier to realize as a BoaT computer program, an order of magnitude faster when this program is run, and achieves a 12-14x decrease in storage requirements.
This paper is further structured as follows. The next section briefly introduces two existing OMS benchmarks which have not been defined for software engineering applications. Their appropriateness for software engineering applications is discussed in Section 3. Section 4 presents our approach of application-specific benchmarks and illustrates the approach by describing a dedicated benchmark to support the selection of an OMS as the basis for the development of a particular SDE. Section 5 describes major aspects of the implementation of benchmarks. Section 6 concludes the paper by sketching some general lessons learned about using OMSs from the implementations of a number of benchmarks on top of different OMSs.
Vehicular Navigation, Control, and Location: Recent advances in technology provide some significant new opportunities in the vehicular control area and, by extension, to the entire area of vehicle operations. New sensors and control procedures make continuous monitoring of locations possible and even introduce the possibility of widespread automated vehicle control. For example, several railroads have installed NAVSTAR satellite receivers so that precise locations of all locomotives are available at all times. Transit agencies have experimented with passive signpost systems to provide similar information on bus locations. The use of autonomous ground vehicles has become economical in applications such as warehousing and factory materials movement. With these hardware developments, a variety of
While transportation systems have always connected people, they are now also being asked to connect ecosystems. In Ohio, upstream fish passage is not only necessary for sustainable fish populations but it also plays an important role in endangered mussel species reproduction. Because engineers have only recently been designing culverts to account for fish passage, many previous assumptions are no longer valid. One of these assumptions is that water flowing through culverts always fills the pipe. Previously, culverts were designed for large floods which do result in full pipes. Ecologically relevant flows, however, are more moderate and result in partially filled culverts. One of the major results associated with variable flow depths inside the culvert is that the culvert