However, a certain number of active nodes must be maintained to ensure the desired level of coverage at all times. A survey of routing techniques and protocols for wireless sensor networks, including energy-aware routing, can be found in the literature. Connectivity preservation, coverage maximization, and energy saving can be pursued in different ways, and in real-world deployments some of these goals are prioritized over others. This prioritization affects which nodes are assigned to routing or sensing roles: for example, a region of the monitored field may be observable by only a single sensor node, which must then remain active. This situation is particularly relevant in sparsely covered areas, which can arise at deployment time or after many node deaths or failures. Conversely, some approaches sacrifice coverage in order to prolong the network lifetime, for instance by reducing the number of active sensors, and thus increasing the number of redundant (sleeping) nodes, even if uncovered areas appear.
In this section, we evaluate the performance of the proposed method. The most important efficiency metric is the network lifetime, followed by the communication cost and the convergence time of the algorithm. The network lifetime is defined as the duration from the moment the network begins its mission until the remaining (live) sensors can no longer cover all of the targets. For simplicity, we assume that energy consumption is linear: 20 units of energy per minute of sensing activity. The energy consumed by other activities, such as data communication and wake-to-sleep transitions (and vice versa), is assumed negligible. We also assume that during its activity in each cover set a sensor consumes 20% of its initial energy for monitoring and then goes to sleep (r = 0.2). The communication cost depends on the number of messages each node must receive to acquire the information needed by the algorithm, while the convergence time is related to the number of algorithm iterations. The algorithms were simulated in MATLAB. N sensors and m targets were randomly and uniformly deployed in a two-dimensional 500 m × 500 m area; the sensing range is 100 m, the communication range is 200 m, and the angle of view is 90°. The proposed optimization algorithm (PSO) is compared to
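Under these assumptions the per-sensor schedule follows in closed form; the sketch below (in Python rather than MATLAB, with an illustrative initial-energy value) shows the arithmetic implied by the 20-units-per-minute rate and r = 0.2.

```python
# Sketch of the linear energy model described above (values taken from
# the text: 20 energy units per minute of sensing, r = 0.2; the initial
# energy of 1000 units is an illustrative assumption).
SENSING_COST_PER_MIN = 20.0   # units of energy per minute of sensing
R = 0.2                       # fraction of initial energy used per cover set

def activation_minutes(initial_energy: float) -> float:
    """Minutes a sensor stays active in one cover set before sleeping."""
    return (R * initial_energy) / SENSING_COST_PER_MIN

def max_activations() -> int:
    """Each sensor can join at most 1/r cover sets over its lifetime."""
    return int(round(1.0 / R))

print(activation_minutes(1000.0))  # 10.0 minutes per cover set
print(max_activations())           # 5 cover sets over the sensor's lifetime
```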
A new approach based on spatial frequency for the fusion of multi-focus images has been proposed in the DCT domain instead of the spatial domain. We evaluate the performance of the proposed method with various evaluation metrics and find that fusion in the DCT domain is superior to conventional DCT-based approaches and to state-of-the-art methods, including DWT, SIDWT, and NSCT, in terms of both visual quality and quantitative parameters. Moreover, the proposed method is simple to implement and computationally efficient when the source images are coded in JPEG format, which is especially valuable in wireless visual sensor networks.
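For illustration, the spatial-frequency activity measure commonly used to select the sharper block in multi-focus fusion can be sketched as follows. This is the standard spatial-domain definition; the paper's DCT-domain variant may differ, and the 8×8 block size and the simple pick-the-sharper-block rule are assumptions.

```python
import numpy as np

def spatial_frequency(block: np.ndarray) -> float:
    """Standard spatial-frequency measure: sqrt(RF^2 + CF^2)."""
    block = block.astype(float)
    rf = np.sqrt(np.mean(np.diff(block, axis=1) ** 2))  # row frequency
    cf = np.sqrt(np.mean(np.diff(block, axis=0) ** 2))  # column frequency
    return float(np.sqrt(rf ** 2 + cf ** 2))

def fuse_blocks(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Per block, keep the source block with the higher activity."""
    return a if spatial_frequency(a) >= spatial_frequency(b) else b
```

A defocused (smooth) block has low spatial frequency, so the rule keeps the in-focus block from whichever source image is sharper at that location.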
Wireless sensor networks are power-constrained networks with limited computational and energy resources. This makes them vulnerable to any adversary able to deploy more resources than an individual node or base station, which may not be a difficult task for the attacker. As described earlier, a typical sensor network may be composed of hundreds of nodes that use broadcast or multicast transmission. This mode of transmission results in a large wireless network with many potential receivers of the transmitted information, making a number of attacks possible, such as packet alteration, new-packet insertion, node capture, replay attacks, denial of service, and traffic analysis. Figure 2 shows the major attacks on WSNs.
Sensor networks are emerging technologies currently being deployed in seismic monitoring, wildlife studies, manufacturing, and performance monitoring. A typical sensor network contains a large number of densely deployed, tiny, low-cost nodes that communicate over a wireless peer-to-peer network. These sensor nodes are densely deployed in a predetermined geographical area and self-organize into ad hoc wireless networks to gather and aggregate data. They use multi-hop and cluster-based routing algorithms, together with resource-management algorithms adapted to the dynamic network topology. The characteristics of a WSN include the wireless medium, the ability to withstand harsh environmental conditions, tolerance of node and communication failures, heterogeneity of nodes, large scale of deployment, unattended operation, low power consumption, low cost, and low data rate. Other characteristics of WSNs are as follows:
In wireless sensor-actor networks, the sensors sense the surroundings and transmit the sensed data to the actors, and the actor nodes respond collectively to achieve their purpose. Since the actors and sensors must communicate at all times, a robust network topology has to be established: the failure of an actor may partition the network into two disconnected parts. Connectivity can then be restored by relocating an actor node. Current recovery schemes consider only single-node failures. This paper overcomes that shortcoming by recovering from multiple node failures through the Least-Disruptive topology Repair (LeDiR) algorithm, which relies on each node's local view of its neighbors to compute the recovery plan.
The Wireless Sensor Network application I have chosen is an Underwater Sensor Network developed by the Massachusetts Institute of Technology Computer Science and Artificial Intelligence Laboratory in conjunction with the CSIRO Centre Laboratory in Australia. More than 70% of our planet is covered by water. Underwater Sensor Networks can be used for scientific exploration, commercial exploitation, and coastline protection. They are envisioned to enable applications for oceanographic data collection, pollution monitoring, offshore exploration, disaster prevention, assisted navigation, and tactical surveillance. The Underwater Sensor Network consists of static and mobile underwater sensor nodes. The features of the UWSN are:
The coordinates of the base station are (0, 0). For small-scale networks, nine nodes are evenly distributed in 30 m × 30 m, 40 m × 40 m, 50 m × 50 m, and 60 m × 60 m network areas. For large-scale networks, 100 nodes are evenly distributed in 100 m × 100 m, 150 m × 150 m, and 200 m × 200 m network areas. The packet delivery rates of these networks are shown in Figure 2 and Figure 3. For networks of different sizes,
Consider a WSN with N distributed sensors and a target, where noise is present between the target and the sensors. The noise is independent and identically distributed (i.i.d.) and follows the standard Gaussian distribution with zero mean and unit variance. As Fig. 1 shows, the N distributed sensors are uniformly deployed in the ROI, a square with area A², and a target appears at a random location in the ROI. The locations of the local sensors are unknown to the FC; each local sensor first monitors the ROI, makes a local decision about the target's presence or absence, and then sends its decision to the FC, which makes the global decision on whether the target is present.
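The decision chain above can be illustrated with a small sketch. The detection threshold and the majority-vote fusion rule are assumptions for illustration, since the excerpt does not specify the FC's actual rule; only the i.i.d. standard Gaussian noise model is taken from the text.

```python
import random

def local_decision(signal: float, threshold: float = 0.5) -> int:
    """One sensor's binary decision on its noisy observation."""
    noise = random.gauss(0.0, 1.0)  # zero mean, unit variance
    return 1 if signal + noise > threshold else 0

def fusion_center(decisions):
    """Global decision: declare the target present on a majority of 1s."""
    return 1 if sum(decisions) > len(decisions) / 2 else 0

random.seed(0)
N = 100
# Strong signal at every sensor, so the FC should detect the target.
votes = [local_decision(signal=3.0) for _ in range(N)]
print(fusion_center(votes))
```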
Location data associated with wireless sensor networks represent critical information. Current developments in localization algorithms for industrial wireless sensor networks seek positioning methods that are more robust, stable, accurate, and efficient while requiring a minimum of resources. The present study therefore builds on industrial wireless sensor networks, whose applications are strongly correlated, to propose a positioning method based on a Monte Carlo localization algorithm that is well suited to the narrow-channel environment encountered in underground mining. The proposed positioning method has very low computational complexity, which greatly reduces the use of network resources. Simulation experiments demonstrate that the proposed Monte Carlo localization algorithm provides strong stability and relatively high positioning accuracy.
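A minimal sketch of the Monte Carlo localization idea as typically used in WSNs (not the paper's specific design; the communication range, speed bound, and particle count are illustrative assumptions): maintain position particles, predict motion, filter by which anchors are in radio range, then resample.

```python
import math
import random

R_COMM, V_MAX, N_PART = 50.0, 10.0, 200  # assumed parameters

def in_range(p, anchor):
    return math.dist(p, anchor) <= R_COMM

def mcl_step(particles, heard_anchors):
    # Predict: each particle moves at most V_MAX since the last step.
    moved = [(x + random.uniform(-V_MAX, V_MAX),
              y + random.uniform(-V_MAX, V_MAX)) for x, y in particles]
    # Filter: keep particles consistent with the anchors actually heard.
    valid = [p for p in moved if all(in_range(p, a) for a in heard_anchors)]
    # Resample back up to N_PART (keep predictions if all were rejected).
    return [random.choice(valid) for _ in range(N_PART)] if valid else moved

def estimate(particles):
    """Position estimate: centroid of the surviving particles."""
    xs, ys = zip(*particles)
    return (sum(xs) / len(xs), sum(ys) / len(ys))
```

The range-based filtering step is what makes the method cheap: no gradient computation or matrix algebra, only distance checks, which fits the low-complexity claim above.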
Wireless sensor networks (WSNs) are employed in many applications, including healthcare monitoring, environment monitoring, and battlefield surveillance. WSNs allow computer systems to remotely interact with the outside world and will transform the way we manage our homes, industries, and environment. A WSN comprises spatially distributed sensor nodes that cooperatively monitor environmental conditions. There are two types of WSN nodes: sink nodes and sensor nodes. While sensor nodes collect environmental information using their sensors, sink nodes provide the link between the sensor nodes and the Internet. Since sink nodes act as gateways, they are built from powerful components for high reliability. Conversely, sensor nodes are normally equipped with low-end components to reduce cost, as thousands of sensor nodes are needed for a WSN to provide secure environment monitoring. Sensor nodes are tiny devices with one or more sensors, a transceiver, storage resources, and actuators.
A wireless sensor network (WSN) can be employed in many application areas, such as traffic control and industrial automation. In WSNs, clustering achieves energy efficiency and scalable performance. A cluster is formed by several sensor nodes, one of which is elected as cluster head (CH). A CH collects information from the cluster members and sends the aggregated sensed data to the base station (BS) or to another CH. The main task of a routing protocol in a WSN is to forward these sensed data to the BS. This paper analyses the advantages of cluster-based routing protocols over flat routing protocols in WSNs.
In addition, many intelligent algorithms, such as fuzzy methods, ant colony algorithms, and other swarm intelligence methods, are used to optimize route selection among cluster-head nodes in wireless sensor networks. The main idea of a heuristic search algorithm is to move to better positions by evaluating the candidate positions in the state space, repeating this search process until the goal is found. The A* algorithm is one such heuristic search algorithm, and it is commonly used to solve optimal routing problems and strategy-design problems.
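As an illustration, a compact A* over a small weighted graph of cluster heads is sketched below. The graph, the link costs, and the trivial heuristic h = 0 (which reduces A* to Dijkstra's algorithm) are assumptions for the example, not any cited paper's setup.

```python
import heapq

def a_star(graph, h, start, goal):
    """graph: node -> [(neighbor, cost)]; h: admissible heuristic."""
    open_set = [(h(start), 0, start, [start])]  # (f, g, node, path)
    best_g = {start: 0}
    while open_set:
        _, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path, g
        for nbr, cost in graph.get(node, []):
            ng = g + cost
            if ng < best_g.get(nbr, float("inf")):
                best_g[nbr] = ng
                heapq.heappush(open_set, (ng + h(nbr), ng, nbr, path + [nbr]))
    return None, float("inf")

# Example: cluster heads A..D with hypothetical link costs (e.g. energy).
graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1), ("D", 5)],
         "C": [("D", 1)], "D": []}
path, cost = a_star(graph, h=lambda n: 0, start="A", goal="D")
print(path, cost)  # ['A', 'B', 'C', 'D'] 3
```

With a non-trivial admissible heuristic (e.g. a lower bound on remaining hop cost), A* expands fewer nodes than Dijkstra while returning the same optimal route.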
In this article, we have discussed the problem of guaranteed QoS for flows. First, based on the arrival curve and the service curve of network calculus, we presented the system skeleton, comprising the sensor-node model with virtual buffer sharing, the flow-source model, and the two-layer scheduling model of sensor nodes. Second, from this system skeleton we derived not only the node QoS model, giving upper bounds on buffer queue length, delay, and effective bandwidth, but also the single-hop/multi-hop QoS model, giving upper bounds on single-hop and multi-hop delay, jitter, and effective bandwidth. Finally, we demonstrated the practicability and simplicity of the model and our approach using example results. Network performance can be optimized by designing suitable regulators and schedulers for the WSN nodes. The network calculus approach offers a trade-off between complexity and accuracy: it is general, simple, and practical for provisioning guaranteed QoS in WSNs and in other wireless networks that share their distributed, multi-hop characteristics.
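For concreteness, the classic single-node network-calculus bounds for a token-bucket arrival curve α(t) = b + rt and a rate-latency service curve β(t) = R·max(t − T, 0) are sketched below. These are the textbook bounds, not the article's exact multi-hop derivation, and the numeric values are illustrative.

```python
def delay_bound(b, r, R, T):
    """Worst-case delay b/R + T; requires r <= R for a stable queue."""
    assert r <= R, "arrival rate must not exceed service rate"
    return b / R + T

def backlog_bound(b, r, R, T):
    """Worst-case buffer queue length (backlog) b + r*T."""
    assert r <= R, "arrival rate must not exceed service rate"
    return b + r * T

# Example: burst 1000 bits, flow rate 50 b/s, server 200 b/s, latency 10 ms.
print(delay_bound(b=1000, r=50, R=200, T=0.01))    # about 5.01 s
print(backlog_bound(b=1000, r=50, R=200, T=0.01))  # 1000.5 bits
```

Multi-hop bounds follow by concatenating the service curves of the nodes along the path, which is how the single-hop model above extends to the multi-hop case.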
In this paper, we introduced four data quality indicators, namely data volume, completeness, time-dependence, and correctness, and provided a theoretical analysis of their relationships. We analyzed the cleaning effect of different orderings of the cleaning steps and proposed a data cleaning strategy suitable for wireless sensor networks. Detailed simulations were carried out to demonstrate the correctness and performance of the suggested data cleaning strategy, which has a significant effect on improving data availability.
The new IEEE 802.15.4 standard defines the physical layer (PHY) and medium access control (MAC) sublayer specifications for low-data-rate wireless connectivity among relatively simple devices that consume minimal power and typically operate within a range of 10 meters or less. An 802.15.4 network can have a star, tree, or cluster-tree topology. A device in an 802.15.4 network can use either a 64-bit IEEE address or a 16-bit short address assigned during the association procedure, and a single 802.15.4 network can accommodate up to 64k devices. Wireless links under 802.15.4 can operate in three license-free industrial, scientific, and medical (ISM) frequency bands, with over-the-air data rates of 250 kb/s (or, expressed in symbols, 62.5 ksym/s) in the 2.4 GHz band, 40 kb/s (40 ksym/s) in the 915 MHz band, and 20 kb/s (20 ksym/s) in the 868 MHz band. Of the 27 channels allocated to 802.15.4, 16 are in the 2.4 GHz band, 10 are in the 915 MHz band, and 1 is in the 868 MHz band.
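The channel numbering of the 2003 revision maps onto these three bands as sketched below (channel 0 in the 868 MHz band, channels 1-10 in the 915 MHz band, channels 11-26 at 2.4 GHz); center frequencies follow the standard's channel-assignment formula.

```python
def center_frequency_mhz(channel: int) -> float:
    """Center frequency (MHz) for an 802.15.4-2003 channel number."""
    if channel == 0:
        return 868.3                            # single 868 MHz channel
    if 1 <= channel <= 10:
        return 906.0 + 2.0 * (channel - 1)      # 915 MHz band, 2 MHz apart
    if 11 <= channel <= 26:
        return 2405.0 + 5.0 * (channel - 11)    # 2.4 GHz band, 5 MHz apart
    raise ValueError("802.15.4-2003 defines channels 0-26 only")

print(center_frequency_mhz(11))  # 2405.0, first 2.4 GHz channel
print(center_frequency_mhz(26))  # 2480.0, last 2.4 GHz channel
```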
6 Hidden and Exposed Terminal Problems: The hidden and exposed terminal problems are distinctive to wireless networks. The hidden terminal problem refers to the collision of packets at a receiving node caused by the simultaneous transmission of nodes that are not within the transmission range of the sender but are within the transmission range of the receiver: both nodes transmit packets at the same time without being aware of each other's transmissions. The exposed terminal problem refers to the inability of a node, blocked by the transmission of an adjacent node, to transmit to another node. These two problems appreciably reduce the throughput of a network when the traffic is high. Thus, the MAC protocol must be free from the hidden and exposed terminal problems.
The proposed protocol is a hierarchical routing protocol. In hierarchical routing protocols, routing is generally divided into two independent phases: intra-cluster routing, in which packets are routed between sensor nodes and cluster heads, and inter-cluster routing, in which packets are routed between cluster heads and the sink. TECARP focuses on intra-cluster routing; inter-cluster routing is left for future studies. TECARP is composed of three phases: network clustering, routing-tree creation, and data forwarding. In the network clustering phase, the network nodes are partitioned into different clusters and the cluster nodes' information is delivered to the cluster head. In the routing-tree creation phase, a limited routing tree is created and a routing table is built for each cluster node. In the data forwarding phase, packets are forwarded using the relay nodes' routing tables. Over time, each node's routing table is updated depending on the cluster status (congestion, fairness, and energy). In the rest of this section, these phases are discussed in detail.
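A hypothetical sketch of the per-node routing-table update described above: each candidate relay is scored on energy, congestion, and fairness, and the table keeps the best next hop. The linear scoring and the weights are assumptions for illustration; TECARP's actual metric may differ.

```python
def score(relay, w_energy=0.5, w_congestion=0.3, w_fairness=0.2):
    """Higher is better: reward residual energy, penalize queue length
    (congestion) and prior selections (fairness). Weights are assumed."""
    return (w_energy * relay["energy"]
            - w_congestion * relay["queue_len"]
            - w_fairness * relay["times_used"])

def update_routing_table(candidates):
    """Pick the best relay among the cluster's candidate next hops."""
    return max(candidates, key=score)

relays = [
    {"id": 1, "energy": 0.9, "queue_len": 5, "times_used": 3},
    {"id": 2, "energy": 0.7, "queue_len": 1, "times_used": 1},
]
best = update_routing_table(relays)
print(best["id"])  # relay 2: less congested and less used wins here
```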
From the literature review, another point of interest is that deploying a wireless sensor network inside a potential landslide area requires knowledge of the node distance relevant to the path loss in order to provide full connectivity. An appropriate propagation model for predicting this path loss and the received-signal coverage is therefore essential for network planning. Extensive research has been conducted on propagation models for wireless sensor networks, both theoretical and empirical. Some empirical models have been proposed to determine the path loss over a wide range of operating frequencies [12–18]. Although these models are simple, they contain no parameter that relates them to forest environments. In one study, a half-space model was proposed for wave propagation at 1–100 MHz in forest areas, and the associated phenomenon, dominated by a lateral-wave mode of propagation, was also discussed. This approach was subsequently extended to a dissipative dielectric-slab model in order to take the ground effect into account for wave propagation at 2–200 MHz in the forest environment. Recently, near-ground wave propagation was examined in a tropical plantation, with experiments conducted in the very high frequency (VHF) and ultra-high frequency (UHF) bands; in that work, the ITU-R model was slightly modified to take the lateral-wave effect into account. Moreover, the ITU-R model was further improved by considering the effect of rain attenuation measured in Malaysia.
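Empirical models of this kind typically take the log-distance form PL(d) = PL(d0) + 10·n·log10(d/d0), from which the maximum node spacing follows directly from the link budget. The sketch below uses illustrative parameter values (reference loss, loss exponent), not any of the cited models' fitted values.

```python
import math

# Illustrative model parameters (assumed, not fitted to forest data).
PL_D0_DB, D0_M, N_EXP = 40.0, 1.0, 3.0

def path_loss_db(d_m: float) -> float:
    """Log-distance path loss in dB at distance d_m metres."""
    return PL_D0_DB + 10.0 * N_EXP * math.log10(d_m / D0_M)

def max_range_m(tx_power_dbm: float, rx_sensitivity_dbm: float) -> float:
    """Largest node spacing at which the link budget still closes."""
    budget_db = tx_power_dbm - rx_sensitivity_dbm
    # Invert the model: d = d0 * 10^((budget - PL(d0)) / (10 n))
    return D0_M * 10.0 ** ((budget_db - PL_D0_DB) / (10.0 * N_EXP))

print(path_loss_db(100.0))       # 100 dB of loss at 100 m with n = 3
print(max_range_m(0.0, -100.0))  # 100 m spacing for a 100 dB budget
```

A forest-specific model would replace the fixed exponent with terms tied to vegetation and the lateral-wave effect, which is exactly the gap the cited studies address.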
The key purpose of middleware for sensor networks is to support the development, execution, deployment, and maintenance of sensing-based applications. This includes mechanisms for formulating complex sensing tasks, communicating a task to the WSN, managing the sensor nodes so that the task is divided and distributed among them, fusing the readings of individual sensor nodes into a high-level result, and reporting that result back to the task issuer. Moreover, appropriate mechanisms and abstractions for dealing with the heterogeneity of sensor nodes must be provided. All mechanisms provided by a middleware system should respect the design principles sketched above and the special characteristics of the WSN, which mostly boil down to energy efficiency, scalability, and robustness. The scope of middleware for WSNs is not limited to the sensor network itself; it also covers devices and networks connected to the WSN. Classical infrastructures and mechanisms are typically not well suited for interaction with a WSN. One reason for this is the limited resources of a WSN, which may make it necessary to execute resource-intensive functions or store large amounts of data in external components. This calls for close interaction between processes executing in a traditional network and the WSN. One example of such “outer” functionality is virtual counterparts: mechanisms residing in the Internet that supplement real-world objects with information-processing capabilities. Thus, middleware for sensor networks should supply a holistic view of both traditional networks and WSNs, which is a challenge for architectural design and implementation. Another distinctive property of middleware for WSNs is imposed by the design principle of application knowledge in nodes. Traditional middleware is designed to accommodate an extensive variety of applications without necessarily needing application knowledge.
Middleware for WSNs, however, has to provide mechanisms for injecting application knowledge into the infrastructure and the WSN. Data-centric communication mandates a communication paradigm that more closely resembles content-based messaging systems than traditional RPC-style communication. Moreover, event-based communication matches the characteristics of the WSN much better than traditional request-reply schemes. In general, application- and communication-specific data processing is more tightly integrated in WSN middleware than in traditional systems. The design principle of adaptive-fidelity algorithms requires the infrastructure to provide suitable mechanisms for selecting