However, few studies have investigated the existence (or otherwise) of LRD within finer-grained end-to-end performance metrics at the level of individual flows. A few studies have reported non-stationary LRD in the round-trip delay of synthetic UDP traffic, and in aggregate NTP/UDP flows, yet the implications of burstiness preservation for the different transport mechanisms, the diverse physical infrastructures, and the unidirectional contributors to the per-packet end-to-end delay of individual flows have not been investigated, partly due to the absence of adequate and ubiquitous instrumentation mechanisms. The dynamic operation of transport-level flow control algorithms and the temporal resource contention that can result in congestion and packet loss along the end-to-end path are among the primary factors influencing unidirectional delay behaviour and imposing high variability, ultimately affecting application throughput. The presence of LRD in intra-flow packet delay would imply bursty and unpredictably variable end-to-end performance. At the same time, the increasing popularity of wireless local and wide area network technologies can itself introduce highly variable performance. Fluctuations in the radio channel quality of W-LANs, as well as the link-layer reliability mechanisms employed by W-WANs, have already been reported as factors of increased variability in end-to-end packet delay. Therefore, quantifying the intensity and longevity of such burstiness can prove useful for models capturing application behaviour, which can then take this phenomenon into consideration while optimising application-level performance parameters.
Long-range dependence (LRD) in communications network traffic can be measured using the Hurst parameter. LRD characteristics in computer networks, however, present a fundamentally different set of problems for research into future network design. Various estimators of the Hurst parameter exist, differing in the reliability of their results. Robust and reliable estimators can help to improve traffic characterization, performance modelling, and the planning and engineering of real networks. Earlier research introduced an estimator called the Hurst Exponent from the Autocorrelation Function (HEAF) and showed why lag 2 in HEAF (i.e. HEAF(2)) is used when estimating the LRD of network traffic. This paper considers the robustness of HEAF(2) when estimating the Hurst parameter of data traffic (e.g. packet sequences) containing outliers.
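As a rough sketch of how a lag-based Hurst estimator of this kind can work (the exact HEAF formulation is in the cited work; the fractional-Gaussian-noise autocorrelation form and the bisection step below are standard, and all function names here are ours):

```python
def acf(x, lag):
    """Sample autocorrelation of the series x at the given lag."""
    n = len(x)
    m = sum(x) / n
    var = sum((v - m) ** 2 for v in x)
    cov = sum((x[i] - m) * (x[i + lag] - m) for i in range(n - lag))
    return cov / var

def solve_h(r, lag=2):
    """Invert the fGn autocorrelation
    rho(k) = 0.5*((k+1)^(2H) - 2*k^(2H) + (k-1)^(2H))
    for H by bisection, given a sample value r = rho_hat(lag).
    Assumes r lies in the attainable range (roughly 0 < r < 1)."""
    def rho(h):
        return 0.5 * ((lag + 1) ** (2 * h) - 2 * lag ** (2 * h)
                      + (lag - 1) ** (2 * h))
    lo, hi = 0.01, 0.99
    for _ in range(60):
        mid = (lo + hi) / 2
        if rho(mid) < r:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def heaf(x, lag=2):
    """HEAF-style estimate: plug the sample ACF at `lag` into solve_h."""
    return solve_h(acf(x, lag), lag)
```

For lag 2 the map from H to rho(2) crosses each target value in (0, 1) exactly once on (0.5, 1), which is what makes the bisection well behaved for persistent (H > 0.5) traffic.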
This chapter began with definitions of self-similarity, LRD, slowly decaying variance, and the heavy-tailed property in relation to traffic modeling. We then presented a survey of related work on existing traffic models, tracing the development from traditional models such as Poisson and Markov-based models to self-similar models. Although the former remain widely popular today because of their relative simplicity, traffic measurements over the past seven to eight years have shown that network traffic data are, in fact, not adequately modeled by such traditional models. These studies also point to the importance of LRD in network traffic, showing that packet loss and delay behavior is radically different in simulations using real traffic data rather than traditional Markovian network models. These findings have led to the development of self-similar models, which have been successful, to a certain extent, in explaining LRD and self-similarity in network traffic. In this respect, the two most widely used self-similar models are FGN and fractional ARIMA.
LRWSs were configured to intersect their laser beams above a 116-meter met mast (the reference mast), where a cup anemometer and wind vane are mounted. Since low elevation angles (< 6°) were used to direct the laser beams, the vertical wind speed could be neglected, and the horizontal wind speed and direction were resolved directly from two independent LOS measurements. At the same time, the third LRWS was configured to perform 60° PPI scans (sector-scans) above the mast, where the iVAP technique was applied to the LOS measurements to retrieve the horizontal wind speed and wind direction. The distance between the three LRWSs and the reference mast ranged from 1.1 km (one staring LRWS and one sector-scan LRWS) to 1.6 km (one staring LRWS). The horizontal wind speeds and wind directions retrieved in the single- and dual-Doppler configurations compared well with the measurements acquired by the top-mounted cup anemometer and wind vane (Figure 9). Overall, the dual-Doppler results show less scatter in the retrieved horizontal wind speed and wind direction than the single-Doppler results. Nevertheless, the single-Doppler results indicate that a single sector-scanning lidar is a cost-effective solution for flat coastal and near-shore wind resource assessment. It should be noted, however, that due to the presence of the wind turbines at Høvsøre, only single-Doppler retrievals acquired when the wind direction was between 118° and 270° were analyzed. For this range of wind directions the flow was never orthogonal to the scanned sector, but roughly parallel to it. The sector-scan configuration, specifically the optimum sector size, has been further studied based on the collected data. That study indicated that, for the layout used in this experiment, the accuracy of the single-Doppler retrievals deteriorated when the sector size was smaller than 30°.
On the other hand, the accuracy of the single-Doppler retrievals did not differ significantly for sector sizes from 30° to 60°. The study proposed that the optimum sector size lies in the range of 30° to 38°.
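The dual-Doppler retrieval described above reduces to solving a 2×2 linear system from the two line-of-sight velocities. A minimal sketch, assuming negligible vertical wind and the sign and azimuth conventions stated in the comments (which are illustrative, not necessarily those used in the experiment):

```python
import math

def dual_doppler(r1, az1, r2, az2):
    """Retrieve the horizontal wind vector (u, v) from two line-of-sight
    (radial) velocities r1, r2 measured along beam azimuths az1, az2
    (radians, measured from the u-axis).  With negligible vertical wind,
    r_i = u*cos(az_i) + v*sin(az_i); the beams must not be parallel."""
    a, b = math.cos(az1), math.sin(az1)
    c, d = math.cos(az2), math.sin(az2)
    det = a * d - b * c
    if abs(det) < 1e-9:
        raise ValueError("beams are (nearly) parallel; system is singular")
    u = (r1 * d - r2 * b) / det
    v = (r2 * a - r1 * c) / det
    return u, v

def speed_direction(u, v):
    """Wind speed and the mathematical direction of the wind vector in
    degrees (meteorological wind direction uses a different convention)."""
    return math.hypot(u, v), math.degrees(math.atan2(v, u)) % 360
```

The determinant check mirrors the physical requirement noted in the text: retrievals degrade as the two beams (or the flow and the scanned sector) approach parallel geometry.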
With respect to coastal landscapes, it has been suggested that barrier shorelines are scale independent, such that the wave number spectrum of shoreline variation can be approximated by a power law at alongshore scales from tens of meters to several kilometers (Lazarus et al., 2011; Tebbens et al., 2002). However, recent findings by Houser et al. (2015) suggest that the beach–dune morphology of barrier islands in Florida and Texas is scale dependent and that morphodynamic processes operating at swash (0–50 m) and surf-zone (< 1000 m) scales are different from the processes operating at larger scales. In this context, scale dependence implies that a number of different processes are simultaneously operative, each acting at its own scale of influence, and it is the superposition of the effects of these multiple processes that shapes the overall behavior and shoreline morphology. This means that shorelines may have different patterns of irregularity alongshore with respect to barrier island geomorphology, which has important implications for analyzing long-term shoreline retreat and island transgression. Lazarus et al. (2011) point out that deviations from power-law scaling at larger spatial scales (tens of kilometers) emphasize the need for more studies investigating large-scale shoreline change. While coastal terrains might not satisfy the strict definition of self-similarity, it is reasonable to expect them to exhibit long-range dependence (LRD). LRD pertains to signals in which the correlation among observations decays like a power law with separation, i.e., much more slowly than one would expect from independent observations or from observations that can be explained by a short-memory process, such as an autoregressive moving average (ARMA) model with small (p, q) (Beran, 1994; Doukhan et al., 2003).
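The contrast between power-law and exponential correlation decay can be made concrete with a small numeric illustration (the parameter values phi and alpha below are arbitrary choices for illustration, not estimates from shoreline data):

```python
def ar1_acf(k, phi=0.7):
    """Exponential correlation decay of a short-memory AR(1) process."""
    return phi ** k

def lrd_acf(k, alpha=0.4):
    """Power-law correlation decay rho(k) ~ k^(-alpha), 0 < alpha < 1."""
    return float(k) ** (-alpha)

def partial_sum(acf_fn, n):
    """Sum of correlations up to lag n: converges for short memory,
    diverges for LRD -- the defining contrast between the two."""
    return sum(acf_fn(k) for k in range(1, n + 1))
```

At lag 100 the AR(1) correlation is essentially zero while the power-law correlation is still about 0.16, which is the "much more slowly than independent observations" behavior the definition refers to.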
We propose a testing procedure based on the Wilcoxon two-sample test statistic in order to test for change-points in the mean of long-range dependent data. We show that the corresponding self-normalized test statistic converges in distribution to a non-degenerate limit under the hypothesis that no change occurred, and that it diverges to infinity under the alternative of a change-point with constant height. Furthermore, we derive the asymptotic distribution of the self-normalized Wilcoxon test statistic under local alternatives, that is, under the assumption that the height of the level shift decreases as the sample size increases. Regarding finite sample performance, simulation results confirm that the self-normalized Wilcoxon test discriminates consistently between hypothesis and alternative and that its empirical size is already close to the significance level for moderate sample sizes.
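A sketch of the raw Wilcoxon change-point statistic may help fix ideas; the test described above additionally divides by a self-normalizing factor, which is omitted in this simplified version:

```python
def wilcoxon_changepoint(x):
    """For each candidate split k, compute the two-sample Wilcoxon-type
    statistic W(k) = sum_{i<=k} sum_{j>k} (1{x_i <= x_j} - 1/2) and
    return the split maximizing |W(k)| together with that value.
    Assumes (as the asymptotic theory does) essentially tie-free,
    continuous data.  O(n^3); a sketch, not an efficient implementation."""
    n = len(x)
    best_k, best_w = 1, 0.0
    for k in range(1, n):
        w = sum((1.0 if x[i] <= x[j] else 0.0) - 0.5
                for i in range(k) for j in range(k, n))
        if abs(w) > abs(best_w):
            best_k, best_w = k, w
    return best_k, best_w
```

A level shift upward makes most pairs across the split satisfy x_i <= x_j, so |W(k)| peaks near the true change-point.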
Detection. The middlebox receives the SSL-encrypted traffic and the encrypted tokens. The detect module searches for matches between the encrypted rules and the encrypted tokens using BlindBox Detect (Sec. 3.2). If there is a match, one can choose the same actions as in a regular (unencrypted) IDS, such as dropping the packet, stopping the connection, or notifying an administrator. After completing detection, MB forwards the SSL traffic and the encrypted tokens to the receiver.

Receiving traffic. Two actions happen at the receiver. First, the receiver decrypts and authenticates the traffic using regular SSL. Second, the receiver checks that the encrypted tokens were encrypted properly by the sender. Recall that, in our threat model, one endpoint may be malicious: this endpoint could try to cheat by not encrypting the tokens correctly, or by encrypting only a subset of the tokens, to eschew detection at the middlebox. Since we assume that at least one endpoint is honest, such verification prevents this attack. Because BlindBox only supports attack rules at the HTTP application layer, this check is sufficient to prevent evasion. Almost all the rules in our datasets were in this category. Nonetheless, it is worth noting that if an IDS were to support rules detecting attacks on the client driver or NIC (which act before verification), an attacker could evade detection by not tokenizing.
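A toy sketch of the matching idea, using deterministic HMAC tags as a stand-in for BlindBox's actual searchable-encryption scheme (the window size and all names here are illustrative, not the paper's parameters):

```python
import hmac
import hashlib

def tokenize(payload, size=8):
    """Sliding-window tokenization of a payload into fixed-size tokens
    (an illustrative choice of tokenization, not BlindBox's exact one)."""
    return {payload[i:i + size] for i in range(len(payload) - size + 1)}

def enc_token(key, tok):
    """Deterministic tag for a token: equal plaintexts give equal tags,
    so the middlebox can match encrypted rules against encrypted tokens
    by equality without ever seeing the plaintext."""
    return hmac.new(key, tok, hashlib.sha256).digest()

def middlebox_detect(enc_rules, enc_tokens):
    """Detect module: report a match if any encrypted rule tag appears
    among the encrypted token tags."""
    return bool(enc_rules & enc_tokens)
```

This also makes the evasion concern concrete: an endpoint that simply omits tokens from `tokenize`'s output defeats `middlebox_detect`, which is why the honest receiver must re-derive and verify the token set.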
expenditure projections, future cash flows and fund balances. Long-range forecasting does not present a 5-year budget or plan for the Town, and it is not a static document to be approved and placed on a shelf. Forecast models are not absolute predictions of the future; rather, they are projections of possible outcomes based on a set of known variables and assumptions, used to evaluate annual decisions. Long-range forecasting supports the organization's long-term financial goal of sustainability and serves the main financial goals of flexibility, efficiency, risk management, sufficiency, and credibility. The application of such analytical long-range forecasting principles contributes toward the following:
In Section 3, some Monte Carlo experiments are presented in order to support our theoretical claims. The Nile River data is studied as an application in Section 4. Section 5 is dedicated to the asymptotic properties of U-processes, which are useful for establishing the results of Section 2 in the long-range case. Sections 6 and 7 detail the proofs of the theoretical results stated in Section 2. Some concluding remarks are provided in Section 8.
the use of continuous-time long-range dependent processes has become a common feature of many applications, especially in econometrics and finance (see Baillie and King 1996; Comte and Renault 1996, 1998). This is probably due to the following two reasons. The first is that the class of continuous-time stochastic processes most commonly employed in finance can be extended to encompass long-range dependent models, which have already been used to model real financial data (see Comte and Renault 1998, p. 311). Existing studies show not only that this extension is possible, but also that it is the natural one for obtaining variations (of prices or rates) with an instantaneous variance of order less than two (but not necessarily integer). The usual short-range dependence case (diffusion processes) corresponds to order one. This property is fundamental in modern continuous-time finance theory (see Merton 1990, Chapter 1, for example) and corresponds to a kind of 'instantaneous unpredictability' of asset prices in the sense of Sims (1984). The second reason is more statistical. Since existing studies (see Ding, Granger and Engle 1993; Ding and Granger 1996) already suggest that some financial series (the Standard & Poor's (S&P) 500 stock market daily closing price index, for example) display some kind of LRD property, existing results for short-range dependent processes are not applicable to the LRD case.
In this light, we borrowed Tstat's proposals and developed Skypeness, a high-performance Skype traffic classifier based on three intrinsic characteristics of Skype traffic, namely delimited packet size, nearly constant packet interarrival times, and bounded bitrate. Specifically, Skypeness computes the mean values of these three features (packet size, interarrival time and bitrate), averaged over windows of 10 packets, for each flow. If the ratio of packet windows whose mean values lie inside a given interval is greater than a given threshold, the flow is marked as Skype. For instance, Fig. 1 shows the appropriate interval and threshold values for Skype audio calls; specifically, it shows the empirical cumulative distribution functions of packet size and interarrival time increments from 44 Skype audio calls when no sampling is applied (continuous line). Packet size is well delimited (between 60 and 200 bytes for more than 75% of the packets) and more than 60% of the interarrival increments are less than 15 ms. Table I shows all the intervals and thresholds corresponding to the different classes of Skype traffic, namely audio-only calls, video (and audio) calls, and file transfers. Note that the detector only considers UDP flows with more than 30 packets (three packet windows). Skype typically uses UDP at the transport layer because it is more suitable for real-time applications; it is uncommon, but possible, for Skype to shift to TCP in an attempt to evade firewalls or similar restrictions. As we leverage packet interarrivals assuming they are
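The windowed decision rule can be sketched as follows; the interval and threshold values below are placeholders in the spirit of the audio-call figures quoted above, not the paper's exact per-class values from Table I:

```python
def classify_flow(packets, win=10, min_packets=30,
                  size_rng=(60, 200), iat_rng=(0.0, 0.015), ratio_thr=0.75):
    """Skypeness-style check: average packet size and interarrival time
    over windows of `win` packets; flag the flow as Skype-like if the
    fraction of windows whose means fall inside the configured intervals
    reaches ratio_thr.  `packets` is a list of
    (size_bytes, interarrival_seconds) tuples for one UDP flow."""
    if len(packets) < min_packets:
        return False          # too short to judge (fewer than 3 windows)
    ok = total = 0
    for i in range(0, len(packets) - win + 1, win):
        chunk = packets[i:i + win]
        mean_size = sum(p[0] for p in chunk) / win
        mean_iat = sum(p[1] for p in chunk) / win
        total += 1
        if (size_rng[0] <= mean_size <= size_rng[1]
                and iat_rng[0] <= mean_iat <= iat_rng[1]):
            ok += 1
    return ok / total >= ratio_thr
```

Averaging per window before thresholding is what makes the rule tolerant of individual outlier packets while still requiring the flow's typical behavior to match.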
Figure 4 shows five CSTs with f varying from 0 to 1 in a 35-node sensor network, with the root node R fixed at the center top position and denoted by big dots in the sensor field. The CSTs in Figures 4(a) and 4(e) are equivalent to the SPT and MST, respectively. We can observe that paths along the SPT tend to run straight toward R in Figure 4(a), and that SPT links are longer than MST links in Figure 4(e). When f increases from 0 to 0.75, we notice only a few changes; most changes occur in the range 0.75 ≤ f ≤ 1. Figures 5 to 8 plot the tree lengths using (5) for the entire range of f (0 ≤ f ≤ 1). Figures 5 to 7 show the tree lengths of the SPT, MST and CST when the number of nodes is 25, 100 and 400, respectively. All curves in these figures decrease monotonically with increasing f, because the tree length is short for high f values due to (4). The CST is always the shortest for all values of f, and the SPT and MST curves cross each other at an f value slightly above 0.9. The difference in tree length between the SPT and MST is large for large sensor networks and for sparse sensor networks.
Hysteresis loop curves are highly important for numerical simulation of material deformation under cyclic loading. Existing models mainly take account of only the tensile half of the stabilized cycle of the hysteresis loop when identifying constants, which do not vary with the accumulation of plastic strain or the strain range of the hysteresis loop. This approach may be quite erroneous, particularly if the mean stress is not small and the effect of isotropic hardening is large. A strain-dependent cyclic plasticity model that considers the variation of material constants with strain range and accumulated plastic strain has been proposed and experimentally investigated by the authors. In this paper it is shown that the proposed model is accurate for simulating all cycles of the hysteresis loop regardless of the strain range of the test. It is also shown that an artificial neural network (ANN) model, if designed and trained properly, can be used to interpolate and extrapolate the experimental data. The results of this work are compared with two well-known cyclic plasticity models. The results indicate a remarkable agreement between the proposed model and the ANN both within and outside the strain ranges used in the experiments.
A packet analyzer is also known as a network analyzer, protocol analyzer or packet sniffer, or, for specific categories of networks, an Ethernet sniffer or wireless sniffer. It is a computer program that can capture and log traffic passing over a digital network or a portion of a network. As data streams flow across the network, the sniffer captures each packet and, if needed, decodes the packet's raw data, showing the values of the various fields in the packet, and analyses its content.
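As a small illustration of the per-field decoding such a tool performs, here is a sketch that parses an Ethernet II header from a raw captured frame (a generic example, not tied to any particular sniffer implementation):

```python
import struct

def decode_ethernet(frame):
    """Decode the 14-byte Ethernet II header of a raw frame: destination
    MAC, source MAC, and EtherType, with the rest left as payload --
    the kind of field-by-field decoding a packet analyzer performs."""
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    mac = lambda b: ":".join(f"{x:02x}" for x in b)
    return {"dst": mac(dst), "src": mac(src),
            "ethertype": hex(ethertype), "payload": frame[14:]}
```

In a real analyzer the EtherType (e.g. 0x0800 for IPv4) selects the next decoder to apply, and so on down the protocol stack.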
In this paper, we develop a delay-range-dependent stability criterion for delayed systems. Jensen's integral inequality, together with the reciprocally convex lemma, is employed, and the derivative of the LKF is estimated more tightly. As a result, a novel stability criterion is derived and, as a by-product, a delay-rate-independent criterion is also obtained. Two numerical examples are provided to substantiate the validity of the proposed method.
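For reference, the standard form of Jensen's integral inequality used in such delay-dependent derivations (the paper's specific LKF and bounds are not reproduced here) is: for any constant matrix $R = R^{\top} > 0$ and scalar $h > 0$,

```latex
\[
  -\int_{t-h}^{t} \dot{x}^{\top}(s)\, R\, \dot{x}(s)\, ds
  \;\le\;
  -\frac{1}{h}
  \left( \int_{t-h}^{t} \dot{x}(s)\, ds \right)^{\!\top}
  R
  \left( \int_{t-h}^{t} \dot{x}(s)\, ds \right).
\]
```

Bounding the integral quadratic term this way is what allows the LKF derivative to be over-approximated by a finite-dimensional quadratic form, which the reciprocally convex lemma then tightens further.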
This is the peer-reviewed version of the following article: Altwegg, R., Collingham, Y. C., Erni, B. and Huntley, B. (2013), Density-dependent dispersal and the speed of range expansions. Diversity and Distributions, 19(1): 60–68, which has been published in final form at http://dx.doi.org/10.1111/j.1472-4642.2012.00943.x. This article may be used for non-commercial purposes in accordance with the Wiley Terms and Conditions for Self-Archiving.
a. The employment ranges for LRSU missions depend on METT-T, operational tempo, and support considerations. In a fast-paced battlefield environment, the depth of LRSU employment is greater because the area of interest is larger. Long-range surveillance detachment teams operate forward of battalion reconnaissance teams and cavalry scouts in the division area of interest. The long-range surveillance company teams operate forward of the LRSD teams and behind most special operations forces. (See Table 1-1.) The duration of an LRS mission depends on the equipment and supplies the team must carry, the movement distance to the objective area, and resupply availability. LRSU teams normally operate up to seven days without resupply, depending on terrain and weather. Teams may be deployed longer in special cases. Operations other than war are likely to be nonlinear, with no identifiable forward line of own troops, so surveillance must extend in all directions. Deployment considerations are adjusted to account for political and geographical effects. The specific area of operations changes as additional maneuver units are sent into the area of operations.
observation is that at the higher flow thresholds the number of observed flows decreases, adding to the decrease in forecast skill. In general, it appears that it is not possible to forecast the larger (extreme) flow events 12 months ahead; rather, it is possible to predict wetter or drier than average periods. Forecast skill tends to be higher for the 12-month-ahead forecast than for the 6-month-ahead forecast, which suggests the importance of the seasonal term in the forecast. Given that forecasts of streamflow are generally better than those of rainfall, our findings support those of Westra and Sharma (2010), who show that global SSTs explain a small percentage of rainfall variability at lags of 12 months. From Table 6 and Eq. (5) it is possible to derive a forecast monthly flow duration curve 12 months ahead in time by generating regularly spaced flow threshold values up to a maximum threshold, say the maximum recorded flow (Fig. 6). The advantage of presenting forecasts as a flow duration curve is that such curves are already used by water managers to determine water extraction rates, by irrigators for irrigation planning, and by biologists to determine environmental flows (Acreman, 2005; Cigizoglu and Bayazit, 2000). Aside from the Thomson River, the forecast probability of flow is systematically overestimated for the other river systems. One reason for this is that the Box-Cox t distribution does not capture all the skewness in these datasets and thus cannot generate the full range of probabilities. One potential solution is to use mixture distributions for the streamflow intensity (Stasinopoulos and Rigby, 2007), but this is not explored further.
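An empirical flow duration curve of the kind referred to above can be computed directly from a flow record. This sketch uses the Weibull plotting position, one common convention; the forecast construction via Table 6 and Eq. (5) is not reproduced here:

```python
def flow_duration_curve(flows):
    """Empirical flow duration curve: for each observed flow value, the
    percentage of time that value is equalled or exceeded.  Uses the
    Weibull plotting position P = rank / (n + 1).  Returns
    (exceedance_percent, flow) pairs, highest flow first."""
    q = sorted(flows, reverse=True)
    n = len(q)
    return [(100.0 * (i + 1) / (n + 1), v) for i, v in enumerate(q)]
```

Reading a point off the curve answers exactly the water-management question in the text: the flow that is equalled or exceeded, say, 80% of the time is a natural basis for extraction-rate or environmental-flow rules.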
the drivers who are already past the stop-line are allowed to continue, d) the yellow flashing signal means that drivers are allowed to go, but must pay attention". However, from the traffic engineering point of view, item c) above is incorrect because, in practice, when the yellow signal turns on there are still drivers who cannot stop and must continue across the stop-line. In November 2007, the Vietnamese-German Symposium on Traffic Signal Control was organized by the University of Transport and Communications of Vietnam (UTC) to discuss traffic signals comprehensively and to introduce the Vietnamese translation of RiLSA (1992 edition). This Vietnamese version then became a reference for Vietnamese traffic engineers. At this symposium, both Nguyen, Q.T. (professor at UTC) and Nguyen, Q.D. (professor at the University of Construction of Vietnam) asserted that it was very necessary to research and establish Guidelines for Traffic Signals in Vietnam.