both in the time domain and the frequency domain. However, NM15 is the most anomalous. Thus, it appears that wave propagation effects play a dominant role in controlling these waveform shapes and amplitude decays. We choose NM07 as the reference station; this choice is similar to analyzing waveforms recorded at basin sites relative to those recorded at hard-rock sites (?). We find the choice is not critical to our analysis, since we are primarily interested in modeling the pattern of changes in travel time and amplitude across the transition region between the western Great Plains and the Rio Grande Rift. We pursue this strategy by generating synthetic waveforms from the above tomographic models and comparing them directly with observations to evaluate their effectiveness. To demonstrate the usefulness of including waveform and amplitude information, we focus first on the transition from the Great Plains to the rift zone, where we expect the largest velocity contrast to occur and waveforms to be heavily distorted. In particular, as we will show later, the waveforms are systematically distorted and broadened, and their amplitudes decrease accordingly. Instead of cross-correlation, we pick travel-time delays and measure waveform amplitudes, and use them as observables to compare with synthetic values. Although other features in the tomographic image appear interesting as well, we leave them for future analysis. In this paper, we examine tangential SH broadband waveforms first, since S-wave anomalies typically show much larger distortions than do P waves. In addition, we perform a suite of synthetic tests to demonstrate the usefulness of broadband waveform and amplitude information in deciphering the location, geometry, and depth extent of a subducting slab.
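The measurement strategy above (direct picks rather than cross-correlation) can be sketched as follows. This is a minimal illustration on toy Gaussian pulses, not the measurement code used in the study; the peak-pick rule and sampling interval are assumptions.

```python
import numpy as np

def pick_delay_and_amplitude(obs, syn, dt):
    """Measure travel-time delay and relative amplitude of an observed
    trace against a reference synthetic by picking the largest swing
    in each trace (a direct pick, not a cross-correlation)."""
    i_obs = np.argmax(np.abs(obs))           # sample of the observed peak
    i_syn = np.argmax(np.abs(syn))           # sample of the synthetic peak
    delay = (i_obs - i_syn) * dt             # positive => observed arrives late
    amp_ratio = np.abs(obs[i_obs]) / np.abs(syn[i_syn])
    return delay, amp_ratio

# toy traces: a delayed, broadened, attenuated "observed" pulse
t = np.arange(0.0, 20.0, 0.05)
syn = np.exp(-((t - 8.0) / 0.8) ** 2)        # reference synthetic
obs = 0.5 * np.exp(-((t - 9.5) / 1.6) ** 2)  # 1.5 s late, half amplitude
delay, ratio = pick_delay_and_amplitude(obs, syn, dt=0.05)
```

The two numbers returned are exactly the observables described in the text: a travel-time delay and an amplitude ratio relative to the reference.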
We will further analyze P waveforms in a separate chapter and the joint results will provide important constraints on the physical state and composition of this interesting slab feature beneath the western edge of the cratonic Great Plains near the Rio Grande Rift.
The study of kinematic waveform inversion of strong-motion records has revealed detailed source processes and allowed a better understanding of seismic sources. In most cases, these source images represent the source process of low-frequency wave radiation, or a low-frequency source model, because the theoretical Green's function for strong-motion records is less reliable and because waveform matching is difficult for frequency ranges higher than 1 Hz. In order to learn about the source physics and its value for strong-motion prediction, recent studies have attempted to construct source models related to broadband wave radiation (e.g., Zeng et al., 1993; Nakahara et al., 2002; Shiba and Irikura, 2005). One of the more successful approaches is the empirical Green's function (EGF) simulation with a simple source patch model characterized within the total rupture area (e.g., Kamae and Irikura, 1998; Miyake et al., 2003; Morikawa and Sasatani, 2004). The EGF method is a technique used to synthesize seismic records by summing up the observed records of small earthquakes. It can therefore simulate realistic waveforms up to high frequencies that are affected by minute heterogeneous propagation-path structures. The assumed source model consists of one or more rectangular areas with no explicit heterogeneity of slip, rise time, or rupture velocity inside them. Miyake et al. (2003) named this area the "strong motion generation area" (SMGA). The SMGA
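The core of the EGF summation can be sketched as a lag-sum of the small-event record over a grid of subfaults. This is only the skeleton of the idea: the full recipe includes a slip-rate correction filter that is omitted here, and the subfault count and delay values below are illustrative assumptions.

```python
import numpy as np

def egf_synthesis(egf, dt, n=3, t_rup=0.4, t_rise=0.2):
    """Lag-sum the small-event record n*n*n times: an n x n grid of
    subfaults, each rupturing n times across the rise time.  t_rup is
    the rupture-propagation delay between adjacent subfaults and
    t_rise the delay between repeated slips (toy values)."""
    max_lag = 2 * (n - 1) * t_rup + (n - 1) * t_rise
    out = np.zeros(len(egf) + int(round(max_lag / dt)) + 1)
    for i in range(n):
        for j in range(n):
            for k in range(n):
                lag = (i + j) * t_rup + k * t_rise   # rupture + rise-time delays
                s = int(round(lag / dt))
                out[s:s + len(egf)] += egf           # add one delayed copy
    return out

# e.g. lag-summing a unit spike with n=2 yields 2**3 = 8 delayed spikes
spike = np.zeros(50)
spike[0] = 1.0
out = egf_synthesis(spike, dt=0.1, n=2)
```

Summing delayed copies of the real small-event record is what lets the method carry realistic high-frequency propagation-path effects into the synthetic.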
of Japan. Analysis of various data sets (i.e., regional and teleseismic broadband seismographic networks, geodetic networks, ocean-bottom measurements, etc.) demonstrates a unique and complex rupture pattern, which indicates varying mechanical and frictional properties along the thrust zone (Fujiwara et al., 2011; Ide et al., 2011; Ito et al., 2011; Simons et al., 2011; Wei et al., 2012; Yue and Lay, 2011). Especially intriguing is the difference in frequency content of the radiated energy emanating from various parts of the rupture zone. Huang et al. (2012) show that the high-frequency radiation in the deeper part is at least partially caused by asperities that have hosted earthquakes before, but the exact mechanism behind the variation of such energy concentration is currently unclear. Since the mainshock, the region has also experienced a sharp increase in aftershock activity. These seismic events span a wide range in size and depth. Current national and international earthquake catalogs rely mainly on travel-time data to determine origin times and spatial locations. Unfortunately, their results often do not agree: the hypocentral locations of earthquakes reported by different networks can vary by over 20 km laterally and by over 10 km vertically, depending on the catalog (Zhan et al., 2012). Furthermore, due to the lack of regional stations on the Pacific side, the resolution decreases with distance from the Japan coast. Such discrepancies create difficulty in constructing a coherent picture of the thrust zone.
One of the most fundamental achievements of the twenty-five-year Vela program is that setting off large explosions at known locations and origin times demonstrated that the Earth is not PREM (Kerr, 1985). In particular, the Longshot experiment on Amchitka Island revealed P-wave travel-time anomalies of over 5 seconds along slab paths, as discussed by Davies and Julian (1972). Early tomographic imaging suggested that subduction is not simple, in that some slabs flatten out and some drop into the lower mantle (Creager and Jordan, 1984; more recently, Obayashi et al., 2013; Simmons et al., 2012). While these images and interpretations are generally consistent, they do not produce significant waveform distortions because the imaged anomalies are too small to explain the waveform complexity (Figure 3.1); see Zhan et al. (2014a). Because of the difficulty in knowing the locations and origin times of offshore events along the Pacific Basin, one can instead compare waveform shapes directly between an outer-rise event and an event in the down-going slab. Those signatures are distinctly different even though the events are less than 300 km apart (Zhan et al., 2014a). By comparing such an event pair, one can ascribe all the differences in the observed waveforms to differences in source-side structure.
seconds, for both Rayleigh and Love waves. At 420 seconds they are also near zero for westward azimuths, for both wave types, although significantly different from zero at eastward azimuths. The synthetics arrive as much as 20 seconds earlier than the data at eastward azimuths for Rayleigh waves, and 40 seconds earlier for Love waves. The general shape of this sinusoidal pattern of time shifts can be explained by a mislocation of the source. The baseline of the sinusoid is indicative of the source delay relative to the estimated origin time, here around 15 seconds, while the amplitude of the sinusoid is related to the mislocation of the source. These results show that the 400-second waves are consistent with a point source that occurs later and farther west than the point source consistent with the 200-second waves. From this we can immediately expect some sort of asymmetric triangular source time function, with a rapid rise in slip near the epicenter, slowly falling off in time and toward the west. The Harvard CMT was constructed to fit mantle waves of 135 seconds and longer and does a very good job of matching the data at 207 seconds. It is interesting, however, that there is such a discrepancy for the longer-period data, indicating that the source is not well matched by a point source at 135 seconds and longer.
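The baseline-plus-sinusoid decomposition described above can be estimated by a simple least-squares fit of the time shifts against azimuth. The sketch below is illustrative, not the procedure of any specific paper: the model dt(az) = t0 + a cos(az) + b sin(az) gives the source delay as the baseline t0 and the mislocation-related amplitude as sqrt(a^2 + b^2); the toy numbers (15 s baseline, 20 s amplitude peaking eastward) echo the values quoted in the text.

```python
import numpy as np

def fit_shift_pattern(azimuths_deg, shifts):
    """Least-squares fit of dt(az) = t0 + a*cos(az) + b*sin(az).
    Returns the baseline t0 (source delay), the sinusoid amplitude
    (proportional to epicentral mislocation over phase velocity),
    and the azimuth of the sinusoid's maximum."""
    az = np.radians(azimuths_deg)
    G = np.column_stack([np.ones_like(az), np.cos(az), np.sin(az)])
    t0, a, b = np.linalg.lstsq(G, shifts, rcond=None)[0]
    return t0, np.hypot(a, b), np.degrees(np.arctan2(b, a)) % 360.0

# toy data: 15 s baseline delay, 20 s sinusoid peaking toward east (az = 90)
az = np.arange(0.0, 360.0, 30.0)
dts = 15.0 + 20.0 * np.cos(np.radians(az - 90.0))
t0, amp, az0 = fit_shift_pattern(az, dts)
```

Fitting the 200-second and 400-second measurements separately would yield the two different effective point sources discussed in the text.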
Energy-aware algorithms are important for extending the lifetime of a wireless sensor network. In energy-constrained settings, network clustering has proved to be an efficient technique that yields low-consumption structures. Yet clustering protocols face a major issue: grouping sensor nodes in an optimal way. This is an NP-hard problem, which motivates the use of evolutionary algorithms to solve it. In this paper, we explore a new hybrid optimization algorithm to decrease energy consumption, in which modified particle swarm optimization and simulated annealing are combined to find the optimal clusters based on transmission distance. Simulated annealing is used as a local search around the best solutions of the modified particle swarm optimization. The simulation results show that our proposed protocol can improve system lifetime compared with existing clustering protocols.
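The hybrid scheme can be sketched as follows. This is a generic PSO-plus-SA sketch under stated assumptions, not the proposed protocol itself: particles are assumed to encode the (x, y) positions of k cluster heads, the objective is total node-to-nearest-head distance (a proxy for transmission distance), and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(heads, nodes):
    # total node-to-nearest-cluster-head distance: the transmission-distance
    # objective the hybrid search minimizes
    d = np.linalg.norm(nodes[:, None, :] - heads[None, :, :], axis=2)
    return d.min(axis=1).sum()

def pso_sa_cluster(nodes, k=3, n_particles=20, iters=60,
                   w=0.7, c1=1.5, c2=1.5, T0=1.0, cooling=0.9):
    """PSO global search with a short simulated-annealing local search
    around the global best each iteration (toy parameter values)."""
    lo, hi = nodes.min(), nodes.max()
    X = rng.uniform(lo, hi, (n_particles, k, 2))            # particle positions
    V = np.zeros_like(X)                                    # velocities
    P = X.copy()                                            # personal bests
    Pf = np.array([fitness(x, nodes) for x in X])
    g, gf = P[Pf.argmin()].copy(), Pf.min()                 # global best
    T = T0
    for _ in range(iters):
        r1, r2 = rng.random(X.shape), rng.random(X.shape)
        V = w * V + c1 * r1 * (P - X) + c2 * r2 * (g - X)   # PSO velocity update
        X = np.clip(X + V, lo, hi)
        F = np.array([fitness(x, nodes) for x in X])
        better = F < Pf
        P[better], Pf[better] = X[better], F[better]        # update personal bests
        if Pf.min() < gf:
            g, gf = P[Pf.argmin()].copy(), Pf.min()
        # simulated-annealing local search around the current global best
        s, sf = g.copy(), gf
        for _ in range(3):                                  # a few Metropolis steps
            cand = np.clip(s + rng.normal(0.0, 0.05 * (hi - lo), s.shape), lo, hi)
            cf = fitness(cand, nodes)
            if cf < sf or rng.random() < np.exp(-(cf - sf) / max(T, 1e-9)):
                s, sf = cand, cf                            # Metropolis acceptance
        if sf < gf:
            g, gf = s, sf
        T *= cooling                                        # cool the temperature
    return g, gf
```

The annealing pass lets the search escape the small local minima that plain PSO tends to stall in, at the cost of a few extra fitness evaluations per iteration.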
The algorithm stops when all the packets have been transmitted. The effectiveness of the algorithm is demonstrated by the value of the standard deviation of the transmission rates (SDTR): throughout this work, the value of SDTR is always less than 1. In the next section, we review some previous work. In Section 3, we present a CRN consisting of only one channel, one PU, and several SUs. We model the problem in Section 4 and present our algorithm in Section 5. In Section 6, we develop some applications, and finally we conclude in Section 7.

2. RELATED WORKS
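The SDTR metric amounts to one standard-deviation computation, sketched below. Two assumptions are made here: TR is taken to mean one aggregate transmission-rate value per SU, and the population standard deviation is used; the source does not specify either.

```python
import statistics

def sdtr(rates):
    """Standard deviation of the per-SU transmission rates (TR).
    A small SDTR (< 1 in the text) indicates the channel was shared
    almost evenly among the secondary users."""
    return statistics.pstdev(rates)

# e.g. four SUs with nearly equal rates give an SDTR well below 1
fair = sdtr([10.2, 10.0, 9.8, 10.0])
```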
Source mechanisms are also used in a variety of studies (e.g., tectonics, stress patterns, cluster analysis) aiming at characterizing an event or a set of events by fault style. Several classifications of fault styles are available in the literature (see, e.g., Célérier, 2010, for an overview). For this work we adopt the classification proposed by Zoback (1992) for the World Stress Map Project (http://www.world-stress-map.org/, last access: April 2019). With this classification an event is assigned one of the following fault styles: thrust, normal, strike-slip, normal with strike-slip component, thrust with strike-slip component, and undefined for events not fitting into any of the previous categories (similar to the "odd" group in Frohlich, 1992). Figure 6 shows the annual number of earthquakes grouped according to Zoback (1992); the sole intent of the figure is to showcase one of the possible uses of the source mechanisms in the ISC Bulletin. Considering the source mechanisms in the ISC Bulletin, as discussed so far, it is obvious that for events with only one mechanism available the fault style is easily assigned. The same also applies to events with multiple solutions all of the same fault style (e.g. http://www.isc.ac.uk/cgi-bin/web-db-v4?event_id=3021752&out_format=IMS1.0&request=COMPREHENSIVE, last access: April 2019). However, for events with more than one solution it is not always possible to assign a fault style with the solutions at hand. This happens, for example, when an event has two source mechanisms, one thrust and the other normal (e.g. http://www.isc.ac.uk/cgi-bin/web-db-v4?event_id=602214316&out_format=IMS1.0&request=COMPREHENSIVE, last access: April 2019). Similarly, if an event has multiple solutions, we may have source mechanisms falling into more than two fault styles (e.g. http://www.isc.ac.uk/cgi-bin/web-db-v4?event_id=602431903&out_format=IMS1.0&request=COMPREHENSIVE, last access: April 2019) or without a unique maximum in the number of source mechanisms belonging to a fault style (e.g. http://www.isc.ac.uk/cgi-bin/web-db-v4?event_id=602945524&out_format=IMS1.0&request=COMPREHENSIVE, last access: April 2019). In such cases we do not assign a fault style to the event. If, instead, the fault style distribution within an event has a most recurrent fault style (e.g. http://www.isc.ac.uk/cgi-bin/web-db-v4?event_id=2944860&out_format=IMS1.0&request=COMPREHENSIVE, last access: April 2019), we still assign a fault style to the event. Therefore, in Fig. 6 we also show the "Discrepant" category for those events where we could not assign a specific fault style. Note that complex earthquakes, such as the ones previously mentioned, may also fall into this category. The annual percentage of events falling into the "Discrepant" category is usually between 0 % and 5 %, with a maximum of 8 % in 2000. The occurrence of such "Discrepant" events should
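The assignment rules just described (unique style wins, a clear majority wins, ties become "Discrepant") reduce to a small majority-vote function. The sketch below is an illustration of those rules, not ISC code; the style strings are example labels.

```python
from collections import Counter

def assign_fault_style(styles):
    """Assign one fault style to an event from the styles of its
    reported mechanisms: a unique most-frequent style wins; a tie
    for the maximum (or no mechanisms at all) yields 'Discrepant'."""
    if not styles:
        return "Discrepant"
    counts = Counter(styles).most_common()
    if len(counts) > 1 and counts[0][1] == counts[1][1]:
        return "Discrepant"          # no unique maximum across styles
    return counts[0][0]
```

For example, an event with mechanisms ["thrust", "thrust", "normal"] gets "thrust", while ["thrust", "normal"] is left "Discrepant".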
It refers to the simultaneous delivery of data from multiple sources to multiple receivers in different groups. Group communication leads to multiple paths forming a tree structure. In the beginning, the source is considered the root of the multicast tree. The multicast tree is incrementally reshaped as members leave and join a multicast group. When an existing member leaves the group, it sends a control message up the tree to prune the branch that has no members attached. Multicasting is used when a lot of information is transmitted to a subset of hosts. Later on, the core-based tree was introduced, in which an intermediate node communicates with the remaining downstream nodes; the core node may also be a source node. The core-based multicast tree constructs the shortest path between the core and the remaining nodes. In Section 2, we outline the literature review. In Section 3, we introduce the SPAN/COST algorithm and cost estimation. In Section 4, we evaluate performance through simulation results. We conclude the paper in Section 5.
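The leave-and-prune step can be sketched with a minimal parent-pointer tree. This is a hypothetical toy data model, not a multicast protocol implementation: `tree` maps each node to its parent and `membership` is the set of group members.

```python
def prune(tree, membership, leaving):
    """Remove `leaving` from the group, then walk up the tree pruning
    every branch that no longer has members attached, mirroring the
    control message a leaving member sends up the tree."""
    membership.discard(leaving)
    children = {}
    for node, parent in tree.items():
        children.setdefault(parent, set()).add(node)
    node = leaving
    # drop the node while it is a memberless leaf, then move to its parent
    while node in tree and node not in membership and not children.get(node):
        parent = tree.pop(node)
        children[parent].discard(node)
        node = parent
    return tree

# chain r -> a -> b -> c with members at a and c: when c leaves,
# the empty branch b-c is pruned but member a keeps its edge to r
pruned = prune({"a": "r", "b": "a", "c": "b"}, {"a", "c"}, "c")
```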
The quintuple method has five primary components: entity (ei), aspect (aij) of an entity, opinion orientation (ooijkl), opinion holder (hk), and time (tl). An entity e is a product, service, person, event, organization, or topic. The term entity is used to denote the target object that is being evaluated. An entity can have a group of components (or parts) and a group of attributes, and every component can in turn have sub-components (or parts) and a group of attributes. An aspect of entity e is a component or attribute of e. The name of an aspect is given by the user, while the aspect expression is a word or phrase that actually appears in a text and reveals the aspect. Likewise, the name of an entity is given by the user, while the entity expression is a word or phrase that appears in a text and reveals the entity. This research uses the topic "the implementation of the national exam (UN)" as the entity, and components of that entity, such as the problems, answers, passing, etc., as aspects. An opinion holder is an individual or organization that expresses an opinion. When reviewing news on online news media, the opinion holder usually writes by making postings. The opinion holder matters most in news articles that explicitly state that an individual or organization holds an opinion. The opinion holder is also called the source of the opinion.
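The quintuple is naturally represented as a small record type. The sketch below is illustrative; the orientation labels and the sample values are assumptions, not data from the study.

```python
from typing import NamedTuple

class Opinion(NamedTuple):
    """The opinion quintuple (ei, aij, ooijkl, hk, tl) described above."""
    entity: str        # ei: the target object being evaluated
    aspect: str        # aij: a component or attribute of the entity
    orientation: str   # ooijkl: e.g. "positive" / "negative" / "neutral"
    holder: str        # hk: who expresses the opinion
    time: str          # tl: when the opinion was expressed

# hypothetical example built around the national exam (UN) entity
op = Opinion(entity="national exam (UN)", aspect="answer",
             orientation="negative", holder="news commenter",
             time="2015-04-01")
```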
of the website is the main goal of the proposed work. Many existing models focus on the design of the website, whereas the framework presented in this paper focuses mainly on the quality of the website. Various quality models and quality-based feedback from multiple websites are taken as the sources for building the framework. Websites created using this framework are evaluated to find the impact they make on users. Quality parameters such as usability, content, reliability, functionality, efficiency, etc., are used as metrics to evaluate the constructed website. The evaluation results are used to carry out modifications to the launched website. Once the modifications are made successfully, the updated version is launched.
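One simple way to combine the listed quality parameters into a single site score is a weighted average. The paper names the parameters but not an aggregation formula, so the weighting below is purely an illustrative assumption.

```python
def quality_score(scores, weights):
    """Aggregate per-parameter scores (usability, content, reliability,
    functionality, efficiency, ...) into one quality number as a
    weighted average; weights are hypothetical, not from the paper."""
    total_w = sum(weights[p] for p in scores)
    return sum(scores[p] * weights[p] for p in scores) / total_w

# toy evaluation: usability weighted twice as heavily as the rest
scores = {"usability": 0.8, "content": 0.9, "reliability": 0.7}
weights = {"usability": 2.0, "content": 1.0, "reliability": 1.0}
overall = quality_score(scores, weights)
```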
The TLL5000 board includes a high-speed video DAC, the ADV7125 from Analog Devices. The ADV7125 is a triple high-speed digital-to-analog converter on a single monolithic chip. It consists of three high-speed, 8-bit video DACs with complementary outputs, a standard TTL input interface, and a high-impedance analog output current source. The ADV7125 has three separate 8-bit-wide input ports. The VGA controller implemented in the FPGA drives the input signals of the video DAC. The functional block diagram of the ADV7125 is shown below.
The UW Extension is pursuing taxpayer-funded telecommunications projects in the Chippewa Valley, Platteville, Superior, and Wausau. It continually suggests that these projects are needed in those rural communities due to a lack of private-sector broadband. Nothing could be further from the truth. Private-sector competition in those four communities is very real. Unfortunately for the UW Extension, the facts tell a story of the private sector in Wisconsin doing what it does best: investing in our communities and providing service to our customers.