Pheromones are chemicals used for communication. Pheromone communication is arguably the most ancient means of communication in the animal kingdom. Animals transmit chemical messages that carry information about food location, territory, threats, and mates. These chemical messages are private, effective in darkness, and long-lasting, and they may function over long distances. Here, we specifically consider pheromones as used by termites. When termites search for food, they use a pheromone to mark the trail from the food back to their nest. Termites deposit the pheromone from a gland on the underside of their abdomens. Back at the nest, they recruit other termites to follow the pheromone trail to the food. Based on termites' use of pheromones, we introduce the pheromone termite model with new features added for routing purposes. Our model adds two new features to the existing working process of the pheromone termite model: the packet generation rate and pheromone sensitivity. The packet generation rate helps avoid congestion, and pheromone sensitivity helps determine the link capacity before packets are sent on each link. Together, these two features provide a tradeoff between fault tolerance and QoS provisioning.
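The routing idea above can be sketched as a probabilistic next-hop choice weighted by pheromone level, with a sensitivity threshold filtering out weak links. This is a minimal illustration under our own assumptions; the function and parameter names are not from the model's specification.

```python
import random

def choose_next_hop(pheromone, sensitivity):
    """Pick a next hop in proportion to pheromone level.

    pheromone: dict mapping neighbor -> pheromone level on that link.
    Links whose level falls below `sensitivity` are treated as lacking
    capacity and are excluded before the probabilistic choice.
    """
    usable = {n: p for n, p in pheromone.items() if p >= sensitivity}
    if not usable:                 # no link passes: fall back to all
        usable = pheromone
    total = sum(usable.values())
    r = random.uniform(0, total)   # roulette-wheel selection
    acc = 0.0
    for neighbor, level in usable.items():
        acc += level
        if r <= acc:
            return neighbor

random.seed(1)
# Link C (0.05) is below the 0.1 sensitivity threshold, so only A or B
# can be chosen here.
print(choose_next_hop({"A": 0.9, "B": 0.2, "C": 0.05}, sensitivity=0.1))
```

In this sketch a congestion-control knob such as the packet generation rate would sit outside the hop choice, throttling how often `choose_next_hop` is invoked per source.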
This paper explores traffic dynamics and the performance of complex networks. Complex networks of various structures are studied. We use node betweenness centrality, network polarization, and average path length to capture the structural characteristics of a network, and network throughput, delay, and packet loss as performance measures. Through simulation, we investigate how internal traffic, throughput, delay, and packet loss change as a function of packet generation rate, network structure, queue type, and queuing discipline. Three network states are classified. Further, our work reveals that the parameters chosen to reflect network structure, including node betweenness centrality, network polarization, and average path length, play important roles in different states of the underlying networks.
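Of the structural measures listed, average path length is the simplest to make concrete. The sketch below computes it for an unweighted graph with breadth-first search; the four-node ring is a made-up example, not a topology from the paper.

```python
from collections import deque

def bfs_distances(graph, src):
    """Hop counts from src to every reachable node (unweighted BFS)."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def average_path_length(graph):
    """Mean shortest-path length over all ordered node pairs."""
    total, pairs = 0, 0
    for src in graph:
        for node, d in bfs_distances(graph, src).items():
            if node != src:
                total += d
                pairs += 1
    return total / pairs

ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}  # 4-node ring
print(average_path_length(ring))  # two neighbors at 1, one node at 2
```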
The study conducted in  has been extended to account for the effects of interference on each of the protocols. Figure 2 and Figure 3 show the Goodput per Joule versus system packet generation rate for two different values of the jammer packet inter-arrival time: 0.05 and 0.01 seconds. These two values were chosen because OPNET simulations indicate that they have a significant effect on the network, thus capturing a worst-case scenario analysis. ZigBee shows negative slopes in both figures, while Wi-Fi and Low Power Wi-Fi show positive slopes. The positive slopes can be attributed to the sleep energy. Specifically, over one cycle of any sensor node, the node spends most of the cycle sleeping and only a small fraction of the cycle transmitting or receiving. Although the sleep current is much smaller than the transmitting and receiving currents, it is multiplied by a much longer time period according to (1), which makes the sleep energy the dominant term in the energy summation. Accordingly, as the packet generation rate increases, the node sends more data over a fixed period of time, resulting in less sleep time. Examining Equation (6) further clarifies this behavior: as the packet generation rate increases, the total data sent and the energy consumed both increase, but energy consumption grows more slowly because the decrease in sleep energy counteracts the increase in transmission and reception energy. Effectively, then, the Goodput per Joule increases as the packet generation rate increases, which explains the positive slopes observed for Wi-Fi and Low Power Wi-Fi. The performance deterioration of ZigBee, in contrast, can be attributed to the amount of data dropped by the ZigBee nodes.
As previously illustrated in , nodes implementing the ZigBee protocol experience dropped data, even in the absence of noise, due to their low transmission rates.
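The sleep-energy argument above can be checked with a back-of-the-envelope calculation. The voltage and current figures below are illustrative assumptions, not values from the paper or its equations (1) and (6): doubling the transmit fraction of a cycle less than doubles the total energy, because the saved sleep energy partly offsets the added transmit energy, so energy per bit falls as the packet generation rate rises.

```python
# Illustrative numbers only (assumed, not from the paper).
V = 3.0                          # supply voltage, volts
I_SLEEP = 20e-6                  # sleep current, amps
I_TX = 20e-3                     # transmit/receive current, amps

def energy_per_cycle(cycle_s, tx_s):
    """Per-cycle energy in joules: active term plus sleep term."""
    sleep_s = cycle_s - tx_s
    return V * (I_TX * tx_s + I_SLEEP * sleep_s)

low = energy_per_cycle(1.0, 0.01)    # low packet generation rate
high = energy_per_cycle(1.0, 0.02)   # twice the data sent per cycle
print(high / low)  # < 2: goodput per joule improves with the rate
```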
Figure 5 is a measure of the variation in bit rate obtained by individual hosts during a test. (The curves are so hard to distinguish that labelling them would have been pointless.) As the number of hosts increases, the fairness increases. The unfairness for small N is due to the intrinsic unfairness of the Ethernet backoff algorithm. As Almes and Lazowska  point out, the longer a host has already been waiting, the longer it is likely to delay before attempting to transmit. When N = 3, for example, there is a high probability that one host will continually defer to the other two for several collision resolution cycles, and the measured standard deviation becomes a sizeable fraction of the mean bit rate.
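The mechanism behind that observation is binary exponential backoff: a host that has suffered k collisions draws its wait from [0, 2^k) slots, so a host with a longer collision history tends to defer longer, reinforcing the capture effect. A minimal sketch (not the paper's measurement setup):

```python
import random
from statistics import mean

def backoff_slots(collisions, max_exp=10):
    """Binary exponential backoff: uniform draw from [0, 2^k) slots,
    with the exponent capped at max_exp as in Ethernet."""
    return random.randrange(2 ** min(collisions, max_exp))

random.seed(0)
fresh = mean(backoff_slots(1) for _ in range(10_000))    # ~0.5 slots
unlucky = mean(backoff_slots(5) for _ in range(10_000))  # ~15.5 slots
print(fresh, unlucky)  # the already-delayed host keeps waiting longer
```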
The first flaw was the introduction of latency while the file is buffered, together with file-size limitations. Firewall vendors have worked around this issue by sending keep-alive packets, yet the overall effect is still added latency. Using memory to buffer files for inspection causes not only additional latency but also a space problem, which is addressed by limiting the overall file size to a preset amount (generally 100 MB). Internet use is growing and sharing of larger files is increasing; hybrid SPI/malware detection technology does not scale. The second flaw was that traditional point solutions were difficult to deploy, manage, and update, increasing operating complexity and overhead costs. Sophisticated malicious attacks penetrate traditional stateful packet inspection products. These solutions simply do not provide sufficient, timely, and unified protection against increasingly complex threats. To overcome these flaws, Dell SonicWALL offers the most effective, highest-performance NGFW solutions available today. Recently, NSS Labs conducted independent testing of the Dell SonicWALL Next-Generation Firewall at their labs facility in Austin, Texas.
By observing the fitted curve, we find that the packet loss rate and the user's quality of experience (QoE) exhibit a nonlinear relationship. For the src13_hrc1_525.yuv video, we fit curves to the scatter plots for the wired and wireless environments, quantifying QoE with the PSNR and MOS methods; the resulting performance indices are shown in Table 8 and Table 9. The evaluation indices are R-square, RMSE (root mean square error), SSE, SROCC (Spearman rank-order correlation coefficient), Pearson, and OR (outlier ratio), and their values lie between 0 and 1. R-square is the adjusted coefficient of determination: the larger its value, the better the model is believed to fit. RMSE, the root mean square error, is a numerical index of measurement accuracy: the smaller its value, the better the fit. SSE is the sum of squared residuals; it is likewise a numerical index of accuracy, and smaller values indicate a better fit. SROCC is the Spearman correlation coefficient between the objective and subjective scores, used to assess the monotonicity of the model's predictions: the larger the coefficient, the more consistent the trends and the better the method. The Pearson correlation coefficient measures whether two data sets lie on a line, i.e., the linear relationship between the variables: the larger the coefficient, the stronger the correlation. OR, the outlier ratio, measures the stability of the model's predictions: the smaller this value, the better the model performs. The larger the Spearman coefficient, the better the performance of the model.
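The fit-quality indices above are standard and easy to compute directly. The sketch below uses toy subjective/predicted scores (not the paper's data) and a tie-free shortcut for the Spearman rank correlation:

```python
import math

subjective = [1.0, 2.0, 3.0, 4.0, 5.0]   # e.g. MOS scores (toy data)
predicted = [1.2, 1.9, 3.3, 3.8, 5.1]    # model output (toy data)

def sse(y, yhat):
    """Sum of squared residuals."""
    return sum((a - b) ** 2 for a, b in zip(y, yhat))

def rmse(y, yhat):
    """Root mean square error."""
    return math.sqrt(sse(y, yhat) / len(y))

def pearson(x, y):
    """Pearson linear correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def srocc(x, y):
    """Spearman = Pearson on the ranks (valid here: no ties)."""
    rank = lambda v: [sorted(v).index(e) + 1 for e in v]
    return pearson(rank(x), rank(y))

print(rmse(subjective, predicted))     # small: good accuracy
print(pearson(subjective, predicted))  # near 1: strong linear relation
print(srocc(subjective, predicted))    # 1.0: perfectly monotone
```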
In the information technology era, there is a relentless need for networks with extremely high capacity. With rising internet usage and an upsurge in users, end users have been longing for a revolution in the network. At this juncture, to make the dreams of network users come true, intense research has produced optical technology, a great boon to the world of communication. Optical Packet Switching (OPS) presents new avenues for high-speed interconnection networks. There is a rapidly growing demand for high-throughput networks to carry heterogeneous traffic services such as voice, images and data, multimedia interaction, and advanced digital services, which warrants OPS. These heterogeneous traffic services in OPS networks require appropriate treatment (e.g., delay or bandwidth guarantees). Quality of Service (QoS) is the only way through which optimization can be attained in OPS. OPS networks pay particular attention to the QoS provisioning problem from an end-to-end perspective; QoS provisioning is therefore a mandatory task in optical packet switching networks. The goal of QoS is to provide guarantees on the ability of a network to deliver predictable results. A network monitoring system must typically be deployed as part of QoS to ensure that networks are performing at the desired level. Optical networks with improved QoS capabilities are clearly the main goal for the next-generation telecommunication infrastructure. One proper way of improving QoS is to enhance the network parameters in each layer. The major parameters that affect QoS performance in OPS are: (1) bit error rate (BER), (2) packet loss rate (PLR), (3) bandwidth, (4) delay, (5) jitter, (6) recovery time, (7) response time, (8) reliability, (9) availability, (10) fault tolerance, and (11) throughput.
Automatic test packet generation tools are sorely needed for checking the liveness of a network. The developed tool checks liveness dynamically by exercising all of the rules, thereby covering the reachable policies. If any faults occur during the information exchange, a fault localization algorithm detects the failed links between the nodes in the system. The implementation likewise extends testing with a simple fault localization algorithm, also built using the header space framework. As in software testing, the formal model helped increase the testing coverage while decreasing the number of test packets. The results show that every link in the network can be exercised with a limited number of packets . We believe the developed project will be just as helpful for automated element testing of complex systems, since it can minimize data loss by sending the original message only after checking the network links and devices with test packets.
The tool Cbench is a prominent benchmark for SDN controller performance measurement. It is generally used to measure the performance of an SDN controller with the following steps: (a) simulating a network topology with a set number of OpenFlow switches and PC hosts; (b) generating a sequence of packet-in messages from each switch to the controller and capturing their responses, i.e., packet-out or flow-mod messages; (c) recording the respective statistical information and calculating the performance parameters of the controller.
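The measurement loop in steps (a)-(c) can be sketched as follows. This is a hypothetical stand-in, not the Cbench code or the OpenFlow wire protocol: `controller` is a dummy function, and the message dictionaries only mimic the packet-in/flow-mod exchange described above.

```python
import time

def controller(packet_in):
    """Dummy controller stub: answers every packet-in with a flow-mod."""
    return {"type": "flow-mod", "match": packet_in["header"]}

def benchmark(num_switches=4, duration_s=0.2):
    """Fire packet-in messages from each emulated switch and count the
    controller's responses per second, Cbench-style."""
    responses = 0
    deadline = time.monotonic() + duration_s
    while time.monotonic() < deadline:
        for sw in range(num_switches):
            reply = controller({"switch": sw, "header": "10.0.0.1"})
            if reply["type"] in ("flow-mod", "packet-out"):
                responses += 1
    return responses / duration_s   # responses per second

print(f"{benchmark():.0f} responses/s")
```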
also maintains a list of precursors that may be forwarding packets on this route. These precursors are notified by the node if the loss of the next-hop link is detected. The precursor list in a routing table entry contains those neighboring nodes to which a route reply was generated or forwarded. The lost neighbor is removed from all precursor lists. If the route is up, the packet is forwarded. If the node is the source of the packet, it issues a route request. If a local repair is in progress, the packet is buffered. If a packet being forwarded on behalf of another node has no route to its destination, the packet is dropped and an error is sent upstream; route errors are broadcast to the upstream neighbors. If a valid route has expired, all packets for it are purged from the send buffer and the route is invalidated. If the route has not expired and packets are waiting in the send buffer, they are forwarded. If the route is down and a packet for this destination is waiting in the send buffer, a route request is sent out; SendRequest checks whether it is actually time to send the request. To track the direction of packet flow, direction_ in hdr_cmn is used instead of an incoming flag. For a packet originating at the node, the IP header length is added to the packet size (ch->size() += IP_HDR_LEN). A node may receive a packet that it itself sent, which usually indicates a routing loop; the TTL is checked and the packet is discarded if it is zero. Time-to-live (TTL) is a value in an Internet Protocol (IP) packet that tells a network router whether the packet has been in the network too long and should be discarded; for a number of reasons, packets may not get delivered to their destination in a reasonable length of time.
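The forwarding decisions above can be paraphrased compactly. The actual implementation is ns-2 C++ (the `ch->size() += IP_HDR_LEN` snippet quoted above); this Python sketch is an illustration, and the boolean flags are assumed stand-ins for the routing-table state, not the real AODV fields.

```python
from dataclasses import dataclass

IP_HDR_LEN = 20  # bytes, matching the quoted ch->size() += IP_HDR_LEN

@dataclass
class Packet:
    src: str
    dst: str
    ttl: int
    size: int = 0

def handle(node, pkt, route_up, local_repair_in_progress):
    """Decide the fate of a packet, mirroring the prose above."""
    if pkt.src == node and pkt.size > 0:
        # Received a packet we ourselves sent: probably a routing loop.
        if pkt.ttl <= 0:
            return "drop"                 # lingered too long: discard
    if route_up:
        return "forward"
    if pkt.src == node:
        pkt.size += IP_HDR_LEN            # originating: add the IP header
        return "route_request"            # source with no route: send RREQ
    if local_repair_in_progress:
        return "buffer"
    return "drop_and_error_upstream"      # forwarding for another, no route

print(handle("n1", Packet("n2", "n9", ttl=8), route_up=True,
             local_repair_in_progress=False))  # forward
```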
Routing within networks must satisfy the QoS metrics. In traditional data networks, routing is concerned with connectivity or cost, and routing protocols usually characterize the network with one or more metrics. However, to support a wide range of QoS requirements, routing protocols need a more complex model in which the network is characterized by multiple metrics such as bandwidth, delay, jitter, loss rate, authentication, and security. This complex model requires considerable processing time. Rough Set Theory (RST) is applied to reduce these metrics and identify the most effective ones. In this paper, RST is applied to reduce the online metrics reported by the Routing Information Protocol (RIP). The paper presents information about network elements (links or nodes) to obtain the Quality of Service (QoS) core . The ROSETTA software is applied to deduce a single QoS metric as a substitute for all routing metrics; this metric is used to select the optimal routes. The results confirm that the proposed metric is well suited for selecting the proper routes.
Mobile nodes in a MANET move randomly without any centralized administration. Due to high mobility, unnecessary packet loss occurs and packet integrity cannot be guaranteed; in the presence of attacks, the data is corrupted or damaged. In this paper, we have developed a cross-layer framework with security and scheduling mechanisms for authentication, which attains integrity and reduces congestion among nodes. In the first phase of the scheme, a cross-layer design is proposed: information is sent to the source node depending on the channel fading determined from the destination node. In the second phase, a scheduling algorithm with CDMA is proposed to achieve scheduling status and time-slot allocation. In the third phase, an optimized encryption/decryption scheme is proposed to achieve high integrity and authentication. In future work, the energy consumption of mobile nodes will be minimized using an energy consumption model with a delay approach. Extensive simulation results show that the proposed scheme, CSCAS, achieves a better packet delivery ratio, authentication rate, and scheduling rate, and lower packet delivery delay and overhead, than the existing schemes USOR and CDJPCSA while varying the mobility, simulation time, and number of nodes.
We have addressed the problem of optimal rate allocation for video streaming in wireless networks with user dynamics, and developed online, parsimonious algorithms with provable convergence properties and performance guarantees. In future work we plan to extend the work to account for capacity variations due to slow fading as caused by user mobility.
The depletion of conventional fossil fuels at a breakneck pace and the upsurge in power demand, along with power market deregulation, have aided the technical and commercial development of a new paradigm in DG all around the globe. DG refers to the interconnection of mini or micro on-site distributed energy resources (DERs) with the main grid at the distribution voltage level. DERs primarily incorporate renewable and non-conventional energy resources such as solar photovoltaic (PV), hydro, wind, tidal, and fuel cells . Several energy-market liberalizations and advances in electronics and communication techniques have facilitated the operation of these geographically dispersed DERs through improved SCADA. These interconnected DERs are capable of operating in both on-grid and off-grid modes.
TCP Westwood and Prairie estimate the available bandwidth and adjust TCP's transmission rate according to the estimated bandwidth. Although they have a more significant impact on improving TCP's performance, these schemes are likely to cause more frequent burst losses and long go-back-N retransmissions as the rate of wireless transmission errors increases, because they do not consider the packet loss rate when adjusting the transmission rate. In this article, we propose an enhanced TCP that dynamically adjusts its transmission rate according to network conditions such as the available bandwidth and the loss rate. When an FRR/RTO is triggered, our scheme adjusts the transmission rate in proportion to the available bandwidth in order to quickly utilize it, and also readjusts the rate in inverse proportion to the loss rate in order to avoid burst losses and long go-back-N retransmissions. In addition, when successive RTOs are triggered, our scheme resets the back-off value if the network is not congested, avoiding the long idle period of an RTO. By doing so, our scheme significantly mitigates the performance degradation caused by congestion-unrelated FRRs/RTOs in wireless networks.
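The described adjustment, proportional to available bandwidth and inversely proportional to loss rate, might be sketched as below. The exact functional form and the constant `k` are assumptions for illustration; the article's own formula may differ.

```python
def adjusted_rate(available_bw, loss_rate, k=1.0):
    """Rate proportional to the bandwidth estimate, scaled down as the
    measured loss rate grows (illustrative form, not the paper's)."""
    return k * available_bw / (1.0 + loss_rate)

print(adjusted_rate(10e6, 0.0))   # no loss: use the full 10 Mb/s estimate
print(adjusted_rate(10e6, 0.25))  # lossy link: back off to 8 Mb/s
```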
of “Pop 2” was larger than 10^8 individuals and could not be reduced to a biologically meaningful value, and the time of divergence between populations was around 50 ya (generation time of 5 y). Thus, the remaining scenarios were compared to assess the one that best fit the data. The highest PP and Bayes Factor (BF) (SI Appendix, Tables S7 and S8) were obtained for the second scenario, involving one domestication mode with introgression from a wild unsampled source population. In all pairwise comparisons the second scenario had a higher probability, with the BF ranging from ~63 to ~10^23. The remaining comparisons had substantially smaller BF values, mostly lower than 1 (SI Appendix, Table S8). This endorsement of the second scenario mirrors recent studies in pigs and other livestock in which a model incorporating continuous gene flow between a wild and a domestic species was better supported than traditional models.
Similarly, we discuss the distribution and update of secret keys in MoteSec-Aware in this section. To keep the confidentiality of messages transmitted over the network, our system uses two types of keys: session keys (used by the LN/FNs to broadcast packets to FNs/SNs) and pairwise keys (used by each pair of nodes). The session key is distributed in advance. After sensor deployment, pairwise keys are constructed for pairs of sensor nodes by applying our CARPY+ scheme . The advantage of CARPY+ is that it can establish a pairwise key between each pair of sensor nodes without needing any communication. This property is essential in constructing the CFA scheme, because establishing a key via communication incurs an authentication problem, leading to circular dependency. CARPY+ is also resilient to a large number of node compromises, such that the complexity of breaking the CARPY+ scheme is exponential in a security parameter that is independent of the number of sensor nodes. When updating the session keys, we customize stateless session-key update schemes, which organize a one-way key chain to facilitate the authentication of future keys based on previous ones. In the stateless session-key update scheme, the network owner α uses the pairwise key K_{α,β} shared with
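A one-way key chain, as used in the session-key update above, can be sketched as follows. This is a generic illustration (SHA-256 and the seed value are our assumptions, not MoteSec-Aware's parameters): each key is the hash of the next one to be disclosed, so a receiver can authenticate a newly disclosed key against the previously known one, while the chain cannot be extended forward by an attacker.

```python
import hashlib

def build_chain(seed: bytes, length: int):
    """Generate a one-way key chain: element i is the hash of element
    i+1, so keys are disclosed in the order returned here."""
    chain = [seed]
    for _ in range(length - 1):
        chain.append(hashlib.sha256(chain[-1]).digest())
    return chain[::-1]   # commitment first, later keys disclosed in turn

chain = build_chain(b"owner-secret", 4)
# A receiver holding chain[0] (the commitment) verifies the next
# disclosed key by hashing it once:
print(hashlib.sha256(chain[1]).digest() == chain[0])  # True
```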
VFA. As a result, a combination of leachate recirculation with proper initial-pH adjustment minimized the inhibitory effects of acid accumulation and accelerated waste biodegradation [5, 14]. This technique of initial-pH control against inhibitory effects was employed in the present study. The properties of ash are important for its utilization in physical-chemical and biochemical processes such as the anaerobic biodegradation of organic waste. Ash has a high initial pH, ranging from 11.0 to 14.0, with an average particle size of 230 µm, and shows acid-neutralization properties [15-17]. This suggested that the high initial pH of ash could help stabilize the VFA accumulation that causes the rapid initial drop in pH. Also, the highly porous carbon and other inorganic particles present in the ash react with water to form circular clusters believed to be responsible for the frothing phenomenon observed when ash is mixed with water [18, 19]. This frothing increases the surface area of the reactive material in the reaction medium [19-21]. In the present work, the frothing phenomenon was believed to have given the ash-loading additive catalytic properties in the ABP. Consequently, ash-loading positively affected the anaerobic biochemical process as a source of inorganic nutrients and also offered the increased surface area responsible for faster biochemical reactions. These properties modified the ABP by altering reaction pathways, indicating that the carbon content of the ash additive determined the reaction properties of the media in which it was present. Hence, ash-loading was believed to possess catalytic properties during the ABP of the present study.
In general, the behaviour of the ash-loading additive during the biodegradation process was comparable to its effect when added to acidic soils for fertility improvement, in which the highly porous carbon content of ash offers increased surface area, adsorbs odours, and possesses catalytic properties similar to those of activated carbon [22, 23]. In view of the above, the sensitivity of the biogas generation rate to ash-loading was investigated.
The SCFS algorithm takes the other extreme by designating as bad the link in the subtree that is most likely to be among the set of bad links, namely the link k. Suppose, on the other hand, that k is not bad. Then all the path segments from k to the destinations in R(k) must be bad. This is, of course, possible, which is why we cannot pin down the bad links with certainty. However, if the rate of occurrence of bad links is sufficiently small, then, as we shall see, it is far more likely that the link k is bad. Anticipating this, we form an inference algorithm that designates link k as bad and its entire set of descendant links as good. Put another way, we estimate Z_k by Ẑ_k = 0, while for all links j ≠ k we estimate Z_j by Ẑ_j = 1.
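The designation rule can be stated in a few lines of code. The tree-as-dictionary representation below is our own assumption for illustration; only the rule itself (Ẑ_k = 0 for link k, Ẑ_j = 1 for every descendant j) comes from the text.

```python
def scfs_designate(tree, k):
    """Apply the SCFS rule: estimate Z_k as 0 (bad) and Z_j as 1 (good)
    for every link j in the subtree rooted at k.

    tree: dict mapping each link to the list of its child links.
    """
    estimate = {k: 0}                  # link k designated bad
    stack = list(tree.get(k, []))
    while stack:
        j = stack.pop()
        estimate[j] = 1                # every descendant designated good
        stack.extend(tree.get(j, []))
    return estimate

# Link 1 has children 2 and 3; link 2 has child 4.
print(scfs_designate({1: [2, 3], 2: [4]}, 1))
```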