
University of Leicester

Department of Computer Science

Robust and Energy Efficient Wireless

Sensor Networks Routing Algorithms

Author:

Mohammad Hammoudeh

Student No:

059012660

Course Title:

MSc Advanced Distributed Systems

Module:

CO7201 Individual Project

First Supervisor:

Alexander Kurz

Second Supervisor:

Emilio Tuosto


Abstract

Wireless sensor networks are a new technology with a wide range of applications in areas such as the military, environment monitoring, and surveillance. Sensor networks have energy constraints, many-to-one traffic flows, and redundant low-rate data. Many routing protocols have been proposed for sensor networks. However, most of them aim for energy efficiency with little or no attention to robustness and fault-tolerance. In this piece of work we consider fault-tolerance in ad-hoc sensor network routing. In particular, we address the design and development of robust and energy efficient routing protocols that separate local from large-scale traffic. A new multipath routing protocol is proposed and simulated. The proposed protocol considers the number-of-hops metric and employs a waiting time before transmitting messages to the sink.


Acknowledgments

I would like to thank all those people who made this thesis possible and an enjoyable experience for me. First of all, I wish to express my sincere gratitude to Alexander Kurz, who guided this work and helped whenever I was in need. Special thanks go to the people in the Cogent Research Group, particularly Sarah Mount.


Table of Contents

Abstract
Acknowledgments
Chapter One: Introduction and Background Information
1.1 Sensor Networks Definition
1.2 Review of routing in traditional networks
1.2.1 Routing definition
1.2.2 Routing Algorithms
1.2.2.1 Design goals
1.2.3 Algorithm types
Chapter Two: Routing in Sensor Networks
2.1 Difference between sensor networks and traditional ad-hoc networks
2.2 Routing protocols design factors in sensor networks
Chapter Three: A Survey on Sensor Networks Routing Protocols
3.1 Data-centric routing protocols
3.1.1 Flooding
3.1.2 Gossiping
3.1.3 Sensor Protocols for Information via Negotiation
3.1.4 Directed-Diffusion
3.1.5 Energy-Aware Routing
3.1.6 Rumor routing
3.1.7 Routing protocols with random walks
3.2 Gradient-based routing
3.2.1 CADR and IDSQ
3.2.2 COUGAR
3.2.3 ACQUIRE
3.3 Hierarchical protocols
3.3.1 LEACH
3.3.2 PEGASIS
3.3.3 TEEN
3.3.4 Self-organizing protocol
3.4 Location-based protocols
3.4.1 GAF
3.4.2 MECN
3.4.3 GEAR
3.5 Routing Protocols Based on Protocol Operation
3.5.1 Multi-path routing protocols
3.6 Query based routing
3.6.1 Negotiation based routing protocols
3.6.2 QoS-based routing
3.6.3 Sequential Assignment Routing
3.6.4 Maximum lifetime energy routing
3.6.5 Coherent and non-coherent processing
Chapter Four: New routing protocol
4.1 Description of the new routing protocol
4.1.2 Data Transmission Phase
4.2 Data aggregation
4.3 Simulation
Conclusions and Future Work


Chapter One: Introduction and Background Information

Sensor networks are wireless ad-hoc networks of low-cost, spatially distributed devices that use sensors to monitor a given phenomenon such as temperature, sound, humidity, position, or motion. Sensor networks are used in diverse application areas (e.g., health care, agriculture, civil engineering, surveillance, and the military). Usually these devices are small in size and limited in resources, including energy, memory, bandwidth, and computational power. Since they are inexpensive, they are deployed in large numbers within the phenomenon area to monitor changes in physical conditions. Recent advancements in wireless communications and electronics have enabled the development of smaller, cheaper, power-efficient devices that communicate with each other over a wireless channel. These are self-organizing, decentralized nodes that communicate directly or through an arbitrary number of intermediate nodes.


1.1 Sensor Networks Definition

A Wireless Sensor Network (WSN) is an ad-hoc network that consists of tiny sensor nodes with sensing, processing, and communication capabilities. These nodes are densely deployed either inside the phenomenon or near it to measure ambient conditions in the sensed environment and then transform these measurements into messages that can be communicated through wireless links to the sink node. WSNs have been recognized as one of the most important technologies for the 21st century [3]. Due to recent advances in wireless communications and electronics, the production of small and inexpensive low-power multifunctional sensor nodes has become achievable. Sensor nodes can be imagined as small special-purpose computers with sensing capabilities as input devices and with limited computation and memory. Famous examples of sensor nodes, still in the research stage, are the Mica Mote, WINS, Smart Dust, and COTS Dust. Each sensor node is typically composed of a processing unit, memory, sensors, a communication unit (usually radio transceivers, or optical in Dust nodes), and a power source, usually AA batteries. Figure 1 shows the block diagram of a Mica2 node [4].

Figure 1: Mica2 block diagram.

The sink node (also known as the base station) is any sensor node connected to a standard PC interface or gateway board [4]. A sink node aggregates network data that can be stored or analyzed to reveal the conditions of the observed phenomena.

Figure 2: Communication architecture of WSN

According to [5], the deployment of sensor networks can be random (e.g., nodes dropped by airplane over the phenomenon area) or regular (e.g., manual deployment of machine-monitoring sensors). Sensors may be deployed on the ground, in the air (e.g., on board an airplane), under water (e.g., Columbia River estuary monitoring), in vehicles, inside buildings, or inside the human body [6]. Naturally, WSNs are networks of large numbers of nodes which collaborate to monitor phenomena and report collected data over wireless links. As mentioned above, each node has a communication unit, typically RF, to communicate with other nodes or directly with the sink node or base station. Figure 2 shows the schematic communication architecture of a WSN: sensor nodes are densely scattered over the sensor area where the phenomenon is located. Each node collects data via its sensors and routes the data either to other sensor nodes or to an external base station. Sensor nodes organize themselves to provide high-quality data about the phenomenon's conditions; this can be done by dividing work among nodes to save energy, with nodes entering sleep mode according to their contribution to the sensing coverage.

The following features enable a wide range of applications for WSNs [7]. A WSN has a close connection with its immediate environment without disturbing that environment, its animals, plants, etc. Furthermore, a WSN is an economical method for long-term data gathering (one deployment, much utilization) and it avoids unsafe or unwise repeated field studies. For instance, the rapid deployment, self-organization, and fault tolerance characteristics of WSNs make them highly valuable for military uses including intelligence, surveillance, and targeting systems [8]. In health care, WSNs also have profound effects, such as monitoring disabled people; biomedical sensors could even help to create vision. Many other applications exist, such as habitat monitoring, environmental observation and forecasting systems (e.g., using thermal sensors to monitor temperature in a forest), and commercial applications including managing inventory, monitoring and controlling machines, monitoring product quality, safety, etc.

1.2 Review of routing in traditional networks

1.2.1 Routing definition

Routing is defined in [1] as “the act of moving information across an internetwork from a source to a destination”. Routing has been a hot research issue since the 1970s, but it achieved commercial popularity in the mid-1980s with the growth of large-scale heterogeneous networks. Routing is usually divided into two separate activities:

Determining optimal routes: routing protocols use routing tables to maintain route information. Different routing algorithms maintain different route information, such as destination and next hop, meaning that the best path to a specific destination is through the next hop on the way to the final destination. When a router receives a packet, it checks the destination address to see if the packet is addressed to itself; otherwise it forwards the packet to the next hop. Figure 3 shows a simple routing table.

Figure 3: Destination, Cost, Next Hop routing table

Routing tables can contain other information and compare metrics to determine the best path for a packet to travel to its destination; these metrics differ between algorithms. Routing protocols use different metrics, or a combination of multiple metrics called a hybrid metric; examples of simple metrics include path length, link reliability, delay, traffic/congestion, bandwidth, and cost. Routers maintain up-to-date routing tables through the exchange of routing update messages that consist of all or a portion of a routing table. A router can build a complete picture of the network topology by evaluating routing updates from other routers and through link-state advertisements.
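As a sketch of how a router might combine several metrics into a hybrid metric when choosing among candidate routes, consider the following. The field names, weights, and routes are invented for illustration and are not taken from any specific protocol:

```python
# Hypothetical routing-table sketch: each candidate route to a destination is
# scored by a weighted sum of its metrics, and the lowest score wins.

def best_next_hop(routes, weights):
    """Pick the next hop whose weighted metric score is lowest.

    routes  -- list of dicts, e.g. {"next_hop": "B", "path_length": 3, "delay": 10}
    weights -- dict mapping metric name to its weight in the hybrid metric
    """
    def score(route):
        return sum(weights[m] * route[m] for m in weights)
    return min(routes, key=score)["next_hop"]

# Two candidate routes to the same destination D.
routes_to_d = [
    {"next_hop": "B", "path_length": 3, "delay": 10},
    {"next_hop": "C", "path_length": 2, "delay": 25},
]
# Path length is weighted far more heavily than delay, so the shorter
# (but slower) route via C wins: 5*2 + 0.1*25 = 12.5 beats 5*3 + 0.1*10 = 16.
hop = best_next_hop(routes_to_d, {"path_length": 5.0, "delay": 0.1})
```

Changing the weights changes the winner, which is exactly why different routing algorithms disagree on the "best" path.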

Transporting packets through an internetwork: this is a straightforward process known as packet switching. Packet switching occurs at Layer 2 (the data link layer) of the OSI model, whereas routing occurs at Layer 3 (the network layer). Figure 4 shows a simple switching table.


1.2.2 Routing Algorithms

Routing algorithms can be differentiated by their design goals and classified into several broad types, following [1].

1.2.2.1 Design goals

The designer's goals are reflected in the operation of the resulting routing protocol. Each routing algorithm may have one or many design goals, including:

Optimality: the ability to select the optimal route to transfer a message from source to destination. The best path is calculated using weighted metrics; for example, path length and link reliability may both be used by a routing algorithm, with the former metric given more weight than the latter.

Simplicity and low overhead: routing algorithms must be simple, with a high level of functional efficiency. Efficiency means doing the job with minimum resource utilization and overhead. Efficiency is especially valuable when the routing algorithm is to run on computers with limited physical resources, such as sensor nodes.

Robustness and stability: a routing algorithm is said to be robust when it can handle emerging or unexpected situations, such as link failures, and endure varied network conditions.

Rapid convergence: convergence is part of the routing table update process. Convergence occurs when all nodes have consistent routing tables and correct distance information. When a link or node fails, the neighbouring nodes notice it, update their route entries, and stimulate recalculation of optimal routes so that all routers agree on these routes. Slow convergence has lethal consequences such as routing loops or network outages [1, 2].


Flexibility: refers to the ability of a routing algorithm to adapt rapidly to different network conditions.

1.2.3 Algorithm types

Most algorithms fall into one of the following broad categories [1]:

Static versus Dynamic: static routing tables are created and maintained by the network administrator and cannot react dynamically to network changes. Static routing algorithms best fit small and steady network environments and do not suit ad-hoc networks. Dynamic routing is the dominant routing approach nowadays [1]; these algorithms adjust dynamically to changes in network conditions by exchanging routing updates that can be sent periodically or triggered by network changes. After receiving route update messages, routing algorithms recalculate their entries accordingly. Hybrid algorithms that use a combination of both static and dynamic approaches are also possible.

Single-Path versus Multipath: multipath routing algorithms, also known as load-sharing algorithms, support more than one path to a given destination and multiplex traffic over these paths, which leads to improved throughput, lower delay, and higher reliability.

Flat versus Hierarchical: flat routing is simply a system where all routers are peers. Hierarchical routing systems, in contrast, have some routers acting as a backbone: all packets from non-backbone routers are forwarded towards the backbone until they reach the backbone router that is closest to the destination or in the same domain as the destination. At this point autonomous routing is used to deliver the packet to the final destination, directly or through one or more intermediary nodes. Each router can only communicate with nodes in its domain or with backbone routers. The prominent advantages of hierarchical algorithms are the simplified organization and efficient traffic isolation within limited domains. In addition, the router design itself can be simplified and made more efficient, since a router only has to know about its own domain and the backbone routers.

Intradomain versus Interdomain: intradomain routing protocols (e.g., Open Shortest Path First) function within domains using routing metrics, while interdomain routing protocols (e.g., Border Gateway Protocol) operate between domains and are more concerned with reachability and policy.

Figure 5: Interdomain versus Intradomain

Link-State versus Distance Vector: link-state (shortest path first) is the second major class of intradomain routing algorithms [2]. Link-state algorithms operate by disseminating information about the whole network to every node through flooding link-state packets (LSPs); these updates are small in size. Link-state protocols converge more rapidly and are more scalable than distance-vector protocols, but require more processing and memory capability. In distance-vector (Bellman-Ford) algorithms, each node maintains a vector containing its distances to all other nodes and sends it only to its neighbours. The messages sent in distance-vector algorithms are large, but they are sent only to a node's direct neighbours.
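A minimal sketch of the distance-vector idea, with invented node names and link costs, is the Bellman-Ford relaxation step in which a node merges a neighbour's advertised vector into its own table:

```python
# Sketch of one distance-vector (Bellman-Ford) update: a node compares each
# destination advertised by a neighbour against its own table and keeps the
# cheaper route. Node names and costs are invented for illustration.

def dv_update(my_table, neighbour, link_cost, neighbour_vector):
    """Merge a neighbour's distance vector into my_table.

    my_table         -- {dest: (cost, next_hop)}
    link_cost        -- cost of the direct link to this neighbour
    neighbour_vector -- {dest: cost} as advertised by the neighbour
    Returns True if any entry changed (i.e., we should re-advertise).
    """
    changed = False
    for dest, cost in neighbour_vector.items():
        via = link_cost + cost          # cost to dest if we route via neighbour
        if dest not in my_table or via < my_table[dest][0]:
            my_table[dest] = (via, neighbour)
            changed = True
    return changed

# Node A knows only its direct neighbour B (cost 1).
table = {"B": (1, "B")}
# B advertises that it can reach C at cost 2, so A learns C via B at cost 3.
dv_update(table, "B", 1, {"C": 2})
```

When updates like this stop producing changes at every node, the network has converged in the sense described under "Rapid convergence" above.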


Chapter Two: Routing in Sensor Networks

In WSNs, robust routing algorithms that improve energy and bandwidth utilization are highly desirable. To realize the inherent characteristics mentioned in Chapter One, sensor network applications require wireless ad hoc networking techniques [8]. However, the existing protocols and algorithms proposed for traditional wireless ad hoc networks, such as cellular networks, do not fit sensor networks well. Due to their unique features and applications, routing in WSNs is a very challenging and difficult problem. In this chapter we study the differences in routing protocols between traditional networks and sensor networks. We also study the design factors of routing protocols in sensor networks.


2.1 Difference between sensor networks and traditional ad-hoc networks

The differences between sensor networks and ad hoc networks are listed below:

1. It is quite difficult to adopt a global identification scheme due to the large number of sensor nodes and the high overhead required for ID maintenance. In contrast to sensor networks, traditional networks use IP addresses as global identifiers; thus IP-based protocols cannot be applied to WSNs. In some cases, getting the data is more important than knowing the sender node's ID [5].

2. Unlike traditional communication networks, sensor networks mainly use the broadcast communication paradigm, typically to transmit sensed data from multiple sources to the sink node, whereas most ad hoc networks are based on multicast or peer-to-peer communications.

3. Sensor nodes require careful resource management as they have limited energy, low-bandwidth radios, limited processing capabilities, and little storage.

4. Sensor nodes can be mobile, which results in unpredictable and frequent topological changes. Topological changes can also be caused by node failures or nodes entering an inactive state.

5. In WSNs routing is application dependent and design goals vary among applications. For example, in military applications low energy consumption is weighted higher than robustness, whereas in emergency rescue and response the opposite is true.

6. Position awareness of sensor nodes is necessary since data is collected on a location basis [5]. Sensor networks are deployed in an ad hoc fashion and operate in unattended mode. As a result, nodes need to discover their neighbours and form connections. Many solutions have been proposed for finding position, like the triangulation-based algorithm proposed in [original]. Algorithms based on triangulation approximate a node's position using the radio strength of a few nodes that know their own positions via the Global Positioning System (GPS).

7. WSNs are data-centric networks, since data is collected based on a specific attribute or query (e.g., only nodes that sense a temperature over 40°C need to respond to a query of the form “>40°C”). Since many nodes may respond with the same data about the same phenomenon, the generated data will contain significant redundancy. This redundancy can be used in a positive way by routing protocols to improve energy and bandwidth utilization, or even for error elimination.

8. The number of sensor nodes in a sensor network can be several orders of magnitude higher than the nodes in an ad hoc network [8].

9. Sensor nodes are densely deployed (from tens to thousands).

10. Sensor networks are fault-prone (due to physical damage or environmental interference) and since their on-site maintenance is infeasible, scalable self-healing is crucial for enabling the deployment of large-scale sensor network applications.

11. Communication channels are unreliable (due to harsh environments or battery depletion).
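The data-centric query model described in point 7 can be sketched as a simple attribute filter: each node checks the query against its own reading and responds only when it matches. The node names and readings below are invented for illustration:

```python
# Sketch of attribute-based (data-centric) querying: a query of the form
# "> threshold" is disseminated, and only matching nodes respond.
# Node names and temperature readings are invented.

def respond_to_query(readings, threshold):
    """Return the readings that satisfy a query of the form "> threshold"."""
    return {node: temp for node, temp in readings.items() if temp > threshold}

readings = {"n1": 38.5, "n2": 41.2, "n3": 45.0}
hot = respond_to_query(readings, 40.0)   # only n2 and n3 respond
```

Note that n2 and n3 report overlapping information about the same phenomenon, which is exactly the redundancy that data-centric protocols later exploit through aggregation.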

2.2 Routing protocols design factors in sensor networks

Many researchers are currently working on developing new algorithms or protocols that take into consideration the differences between sensor networks and ad hoc networks, along with application and architectural requirements. Since topological changes are very frequent in sensor networks, network routes need frequent updates as well. Maintaining up-to-date routing tables is a challenging process due to energy restrictions and other factors. Most of the existing protocols focus on reducing energy consumption and ignore other important design factors such as fault tolerance and scalability. To reduce energy expenditure, routing protocols proposed in the literature for sensor networks adopt existing well-known routing strategies as well as new strategies developed specially to meet the inherent properties of sensor networks. Almost all of the routing protocols can be classified by routing technique as flooding, gradient, clustering, or geographic. Moreover, these protocols can be classified according to the network structure as flat, hierarchical, or location-based. In flooding protocols data is broadcast to all neighbouring nodes, while in gradient protocols the number of hops is memorized when the data is disseminated through the whole network [9]. In clustering protocols, cluster-head nodes are selected so that the high energy dissipation of communicating with the sink node is spread over all sensor nodes in the network [10]. The last category, geographic protocols, employs position information to deliver data to a specific area rather than the whole network [11]. In the following we review the most prominent factors that have a bearing on routing protocol design.
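The gradient technique mentioned above (memorizing hop counts as data is disseminated through the network) can be sketched as a breadth-first flood from the sink. The topology below is an invented adjacency list, not taken from any cited protocol:

```python
from collections import deque

# Sketch of gradient setup: the sink floods a setup message and every node
# memorizes its hop distance to the sink; data can later flow "downhill"
# along decreasing hop counts. The topology is invented for illustration.

def build_hop_counts(neighbours, sink):
    """Breadth-first flood from the sink; returns {node: hops_to_sink}."""
    hops = {sink: 0}
    queue = deque([sink])
    while queue:
        node = queue.popleft()
        for n in neighbours[node]:
            if n not in hops:            # first copy wins: shortest hop count
                hops[n] = hops[node] + 1
                queue.append(n)
    return hops

topology = {
    "sink": ["a", "b"],
    "a": ["sink", "c"],
    "b": ["sink", "c"],
    "c": ["a", "b"],
}
gradient = build_hop_counts(topology, "sink")   # c ends up 2 hops from the sink
```

A node holding data then simply forwards it to any neighbour with a smaller hop count, with no per-destination routing table required.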

1. Fault Tolerance: Loosely speaking, fault tolerance means reliability and availability. Sensor nodes may fail due to software errors (e.g., timing failures [12]) or hardware errors (e.g., physical damage, lack of power, or environmental interference). The failure of sensor nodes should not block the whole network; instead, routing protocols must find alternative routes to the data collection sink and sustain the overall function without interruption due to sensor node failures [8]. This is known as the reliability or fault tolerance issue. Achieving fault tolerance may require adjusting signalling rates and transmit powers dynamically, or forwarding packets through nodes with higher remaining energy.


Consequently, redundancy could be useful in a fault tolerant sensor network.

2. Scalability: Scalability is one of the features of a good network design. Data-link and physical-layer technologies may change quite often, but the network control plane, of which routing is the corner-stone, must last many generations of underlying technology. Sensor networks consist of large numbers of sensor nodes, ranging from hundreds to thousands depending on the application. Routing protocols must be able to operate correctly with such large numbers of nodes and exploit the density of sensor networks for saving energy, distributing load, and correcting faults. Furthermore, routing protocols must be able to react dynamically to environmental events. When the phenomenon's conditions are stable, some nodes enter sleep mode while other nodes keep the network functioning with high performance. When important events occur, sleeping nodes automatically change state to active and start operating again.

3. Node cost: The number of sensor nodes used to monitor a phenomenon may reach thousands. Because of this large number of nodes, the cost of a single node has to be kept low. If the network is more expensive than deploying traditional sensors, the sensor network is not cost justified [8].

4. Hardware constraints: A typical sensor node is made of a power unit, a processing unit, a sensing unit, and a transceiver [13]. Nodes may also have application-specific components like a power generator, a mobilizer, or a location-finding system. The sensing unit is composed of sensors that can sense a variety of phenomenon attributes (e.g., temperature, pressure, etc.) and an ADC unit that converts analogue inputs into digital signals that can be processed by the processing unit. The processor is normally coupled with a small memory to carry out the functions that make a sensor node collaborate with other nodes to achieve the desired task. The transceiver is the communication unit that enables a sensor node to interact with peer sensors or directly with the sink. The power unit is also a major unit that supplies the node with energy; sometimes it is also associated with a power generator (e.g., a solar cell). Other application-specific components include a mobilizer to handle node mobility and a location-finding system to determine the node's location.

5. Transmission media: In sensor networks the wireless communication medium takes many forms, including radio, infrared, and optical media. The µAMPS sensor node [14] uses a Bluetooth 2.4 GHz transceiver. In [15] a wireless sensor node uses a single-channel RF transceiver operating at 916 MHz. The Wireless Integrated Network Sensor (WINS) [16] is based on radio links for communication. Optical media is another important transmission medium, which the Smart Dust mote [17] uses for transmission. Infrared is a license-free communication medium used by sensor nodes deployed in high-interference environments. Like optical media, infrared requires line of sight between the two communication endpoints.

6. Environment: Sensor networks operate in unattended environments when they are deployed in remote areas. They may be working under the soil, embedded in machines, under water, in biologically contaminated fields, or in a home or building.

7. Sensor network topology: Hundreds to several thousands of nodes are deployed throughout the sensor field [8]. They can be deployed in densities as high as 20 nodes/m³ [14]; this high density requires careful handling of topology maintenance. In [8], topology maintenance and change is viewed as three phases:

a. Pre-deployment and deployment phase: deployment of sensor nodes in the sensor field can take several forms:


i. Random deployment: sensors are placed in remote or dangerous environments without any communication lines, for example by dropping nodes from an airplane or rocket. This type of deployment is common in disaster management and military applications.

ii. Manual deployment: nodes are placed one by one in the sensor field, by a human or a robot. This type of deployment is widely used in agricultural applications (e.g., nodes planted underground).

b. Post-deployment phase: after deployment, topological changes occur due to many factors such as coverage, interference, power management, task, etc.

c. Redeployment of additional nodes phase: to compensate for nodes that fail due to energy depletion or physical damage, redeployment of new sensors may be necessary to keep the network functioning.

8. Power: Wireless sensor nodes are usually powered by batteries, which makes a sensor node's lifetime directly dependent on battery lifetime. A sensor node collects data about the environment and performs quick local processing on the sensed data before transmission. Thus, power consumption is distributed over three tasks: sensing, processing, and transmission. In addition, nodes can also act as routers or intermediate nodes that relay other nodes' data. As nodes fail, possibly due to power failure, routing paths need to be updated. For these reasons, power conservation is acquiring more importance and researchers are paying more attention to the design of power-aware protocols for sensor networks.


9. Data Aggregation: Since sensor nodes may generate significant redundant data, similar packets from multiple nodes can be aggregated to reduce the number of transmissions [18]. A variety of aggregation functions can be used to combine data from different sources, such as duplicate suppression, minima, maxima, and average [19]. If sensor nodes are allowed to perform in-network data reduction, some of these aggregation functions can be applied fully or partially in each sensor node [20]. According to [18], communication consumes more energy than processing. For this reason, aggregation has been applied in many routing protocols to achieve energy-efficient data transmission [10]. Signal processing techniques can also be used for aggregation; in this case it is referred to as data fusion, where a node uses techniques like beam-forming to combine signals and reduce their noise to produce an accurate output signal.

10. Data Delivery Method: Data delivery to the sink can be classified as time-driven, event-driven, or a hybrid combining these categories, depending on the application. The time-driven method is used in applications that transmit data periodically. In the event-driven method, data transmission occurs as a response to drastic events or to a query generated by the sink or another node. The data delivery method has a significant impact on routing protocols in terms of energy minimization and route convergence.
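The aggregation functions listed in point 9 (duplicate suppression, minima, maxima, average) can be sketched as follows; the sample readings are invented for illustration:

```python
# Sketch of in-network aggregation: an intermediate node combines redundant
# readings from its children before forwarding a single value to the sink.
# Function names and sample readings are invented for illustration.

def aggregate(readings, how):
    """Combine redundant sensor readings before forwarding them upstream."""
    if how == "suppress":                # duplicate suppression
        return sorted(set(readings))
    if how == "min":
        return min(readings)
    if how == "max":
        return max(readings)
    if how == "avg":
        return sum(readings) / len(readings)
    raise ValueError(f"unknown aggregation function: {how}")

samples = [21.0, 21.0, 22.0, 23.0]       # two nodes reported the same value
deduped = aggregate(samples, "suppress")  # [21.0, 22.0, 23.0]
mean = aggregate(samples, "avg")          # 21.75
```

With "avg", four incoming packets become one outgoing packet, which is the transmission saving aggregation is after.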


Chapter Three: A Survey on Sensor Networks Routing Protocols

In this chapter we focus our study on the network layer of sensor networks, describing and classifying different routing protocols. A routing protocol is said to be adaptive if it can change dynamically to adapt to current network and node conditions, such as the energy available. Routing in sensor networks can be categorized based on the network structure into flat-based routing, hierarchical-based routing, and location-based routing. In flat-based routing, all nodes are peers with equal roles and functionality. In hierarchical-based routing, nodes are distributed over different levels, where each level reflects a different role in the network. In location-based routing, routing decisions are built on location information. Additionally, sensor network routing protocols can be classified by protocol operation into multipath-based, negotiation-based, QoS-based, and coherent-based routing. Furthermore, routing protocols can be categorized based on how the source finds a route to the destination (normally the sink) into proactive, reactive, and hybrid. In proactive protocols, all routes are computed and stored in routing tables before they are needed, whereas in reactive protocols routes are computed only on demand. Hybrid protocols are a mixture of both classes. Another class of routing is called cooperative routing: data is sent to a central node where it can be aggregated or undergo further processing to reduce energy consumption. Many other classifications exist, such as timing- or location-based classes.


3.1 Data-centric routing protocols

Node addresses are the basis of routing in traditional networks. However, in sensor networks it is not feasible to assign global identifiers to sensor nodes due to their large number. This makes it very difficult to query a specific set of nodes without transmitting data from all nodes in the network, which is energy inefficient. Many routing protocols, known as data-centric protocols, were proposed to solve this problem. In this section we examine a set of these protocols in detail and compare them to each other, highlighting the strengths and weaknesses of each.

3.1.1 Flooding [20]

Flooding was originally developed for traditional networks but can be used for routing in sensor networks. Flooding is a reactive technique that does not require complex routing algorithms or topology maintenance; each sensor node, upon receiving a packet (data or network traffic), forwards it to all of its neighbours. This process is repeated until the packet has reached its destination or exceeded the maximum number of hops allowed. The algorithm has a number of deficiencies listed in [21]:

Implosion: happens when a node receives several copies of the same packet from several neighbours. For example, if a sensor node X has N neighbours that are also neighbours of sensor node Y, then Y will receive N copies of the same packet.

Overlap: if many nodes observe the same area, their neighbours will receive duplicate messages about the state of the shared area.

Resource blindness: flooding is not energy aware. An energy efficient protocol must adapt to the amount of energy available and change its behaviour accordingly.
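The implosion problem can be illustrated with a toy flooding simulation. The topology and hop limit below are invented, and the model ignores radio details entirely; it only counts packet copies:

```python
from collections import deque

# Toy flooding simulation illustrating implosion: X and Y share three
# neighbours, so Y receives three copies of the same packet (N = 3 here).
# Topology and hop limit are invented for illustration.

def flood(neighbours, source, max_hops):
    """Flood one packet from source; return how many copies each node receives."""
    copies = {n: 0 for n in neighbours}
    queue = deque([(source, 0)])
    seen = {source}                   # each node forwards the packet only once
    while queue:
        node, hops = queue.popleft()
        if hops >= max_hops:
            continue
        for n in neighbours[node]:
            copies[n] += 1            # every neighbour receives a copy
            if n not in seen:
                seen.add(n)
                queue.append((n, hops + 1))
    return copies

topo = {
    "X": ["a", "b", "c"],
    "a": ["X", "Y"], "b": ["X", "Y"], "c": ["X", "Y"],
    "Y": ["a", "b", "c"],
}
copies = flood(topo, "X", max_hops=3)   # Y receives 3 copies of one packet
```

Even in this five-node example, Y wastes two receptions, and the waste grows with node density.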


3.1.2 Gossiping [20]

Gossiping is an enhanced version of flooding in which incoming packets are forwarded to only one randomly selected neighbour rather than broadcast to all neighbours. Each sensor node repeats the same process until the packet is delivered to its final destination. This algorithm eliminates the implosion problem by having only one copy of the message at any sensor node. However, it causes long propagation delays because of the time it takes the packet to reach all sensor nodes.
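Gossiping's single-copy forwarding can be sketched as a seeded random walk; the topology, TTL, and seed below are invented for illustration:

```python
import random

# Toy gossiping step: each node forwards the packet to a single randomly
# selected neighbour, so only one copy is in flight at any time (no
# implosion), at the price of a slow walk. Topology and TTL are invented.

def gossip(neighbours, source, ttl, seed=0):
    """Random walk of one packet; returns the ordered list of nodes visited."""
    rng = random.Random(seed)         # seeded so the walk is reproducible
    path, node = [source], source
    for _ in range(ttl):
        node = rng.choice(neighbours[node])
        path.append(node)
    return path

topo = {"X": ["a", "b"], "a": ["X", "Y"], "b": ["X", "Y"], "Y": ["a", "b"]}
walk = gossip(topo, "X", ttl=4)       # the packet visits ttl + 1 nodes
```

Contrasting this with the flooding sketch above: flooding delivers N copies in a couple of hops, while gossiping delivers one copy after a potentially long walk.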

3.1.3 Sensor Protocols for Information via Negotiation (SPIN) [21]

A family of adaptive protocols designed to overcome the deficiencies of flooding by negotiation and resource-adaptation. The basic idea to make sensor nodes operate more efficiently and conserve energy is to transmit meta-data, data that describes data, instead of the actual data itself since meta-data is shorter than the actual packets. The meta-data has one-to-one relationship with the actual data. This means two pieces f indistinguishable data share the same meta-data and two distinguishable data does not share the same meta-data. The meta-data should be smaller in size that the actual data to be beneficial for SPIN. The format of the meta-data is application specific and not specified in SPIN. For example, sensor nodes may use their own unique IDs to present meta-data if they cover a certain known area. The SPIN family of protocols uses meta-data negotiation to address the problem of flooding to conserve energy by sending meta-data instead of sending the data itself. SPIN is a three-stage protocol where each stage uses different type of message to communicate. It has three types of messages, which are ADV, REQ, and DATA. When a SPIN node has data to transmit it, it broadcast an ADV message containing meta-data of the DATA. If a neighbour receiving and ADV is interested in the data, it sends a REQ message for the DATA
and the sensor node that originated the ADV message sends the DATA to that neighbour. If a sensor node does not respond to an ADV message, the sender implicitly understands that the node is not interested, which reduces messaging overhead. SPIN also uses a random delay before sending REQ messages to prevent overlap. This process is repeated at each neighbouring node until every node in the sensor network that is interested in the data has received a copy of the DATA message.
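The three-stage ADV/REQ/DATA handshake can be sketched as follows. This is a simplified, single-threaded model: the class and attribute names are our own, and meta-data is modelled as a plain string naming the data item.

```python
class SpinNode:
    """Minimal sketch of the SPIN-1 three-stage handshake (ADV/REQ/DATA)."""
    def __init__(self, name):
        self.name = name
        self.store = {}        # meta-data -> data this node already holds
        self.neighbours = []

    def advertise(self, meta):
        """Stage 1: broadcast an ADV naming new data to all neighbours."""
        for n in self.neighbours:
            n.on_adv(self, meta)

    def on_adv(self, sender, meta):
        """Stage 2: send a REQ only if the advertised data is new here."""
        if meta not in self.store:
            sender.on_req(self, meta)

    def on_req(self, requester, meta):
        """Stage 3: transfer the actual DATA to the requester."""
        requester.on_data(meta, self.store[meta])

    def on_data(self, meta, data):
        self.store[meta] = data
        self.advertise(meta)   # re-advertise so the data diffuses onward

# Line topology a - b - c: one ADV from `a` disseminates the reading to all.
a, b, c = SpinNode('a'), SpinNode('b'), SpinNode('c')
a.neighbours, b.neighbours, c.neighbours = [b], [a, c], [b]
a.store['temp-42'] = 21.5
a.advertise('temp-42')
```

Note how the `meta not in self.store` check is what stops the implosion and overlap problems of flooding: a node that already holds the data simply ignores the advertisement, so re-advertising terminates.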

The SPIN family includes several protocols [21] [23] [24]:

 SPIN-1: each node uses negotiation before transmitting data to ensure that only useful data is transferred (no duplicate messages or overlap).

 SPIN-2: a threshold-based, energy-aware version of SPIN-1 in which each node has a resource manager. When a node's energy approaches a low-energy threshold, it reduces its participation in the protocol (e.g. it participates only when it has enough energy to complete all three stages).

 SPIN-BC: designed for broadcast channels.

 SPIN-PP: designed for point-to-point communication.

 SPIN-EC: similar to SPIN-PP but with an energy heuristic.

 SPIN-RL: similar to SPIN-PP but with adjustments to deal with lossy channels.

SPIN was compared experimentally to flooding and gossiping in [21]. The results showed that SPIN-1 disseminates data as quickly as flooding while using only 25% of the energy, whereas SPIN-2 delivers 60% more data per unit of energy than standard flooding. Both SPIN-1 and SPIN-2 outperform gossiping and come close to the ideal dissemination protocol. In terms of mobility, SPIN nodes and users can be mobile or stationary, and the events
monitored can be continuous or discrete (e.g. target detection). SPIN has several strengths and drawbacks; some of these are summarized in the following.

Points of strength:

 Simple negotiation

 Topological changes are localized

 Works for mobile sensors and users

 Robust since it is immune to node failure

 Scalable since it needs local interaction with neighbours only

 Solves the deficiencies of classical flooding efficiently

 Small delays

Points of weakness:

 Use flooding

 Nodes are always active consuming more energy

 Advertisement methods cannot guarantee the delivery of data

In conclusion, the SPIN family of protocols are simple protocols that disseminate data efficiently without maintaining any topological information or neighbour state. This makes them well suited to environments with mobile nodes or users, since forwarding decisions are based only on local neighbourhood information.

3.1.4 Directed-Diffusion

Directed-diffusion is a data-centric paradigm for sensor networks proposed by Intanagonwiwat et al. in [20]. It is data-centric in the sense that all data generated by sensor nodes is named by assigning attribute-value pairs. The sink queries for data by broadcasting an interest, which is a task description defined using a list of attribute-value pairs such as interval, geographical area, etc. The interest message contains a timestamp field and several gradient fields. A gradient is a reply link to the sensor node from which
the interest was received. Each node receiving the interest stores a copy in its cache to compare sensed data with the interest and to prevent loops. Interest and data propagation and aggregation are decided locally. As the interest is propagated throughout the sensor network, gradients from the sources back to the sink are set up to draw data satisfying the query toward the requesting node. Every node that receives the interest sets up a gradient, so that gradients are eventually drawn from the sources back to the sink.

The amount of information flow depends on the strength of the gradient toward different neighbours. When the interest fits the gradients, multiple paths of information flow are formed, but only the best paths are reinforced to avoid flooding and thus reduce communication overhead. Finding the optimal aggregation paths is known to be a minimum Steiner tree problem [19]. When the sink starts to receive data from a source, it periodically retransmits the original interest message with a smaller interval to reinforce the source node to send data more frequently over a specific path. This is essential because interests are not transmitted reliably. When the path from a source node to the sink fails, an alternative path can be found and used.
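The interest flooding and gradient setup can be sketched as a breadth-first flood in which each node records the neighbour it first heard the interest from; data then follows these reply links back to the sink. This is a toy model with an illustrative topology, omitting reinforcement and multiple gradients per node:

```python
from collections import deque

def flood_interest(network, sink):
    """Flood an interest from the sink; every node records a gradient,
    i.e. the neighbour from which it first heard the interest. Data is
    later drawn back to the sink by following these gradients."""
    gradient = {sink: None}
    frontier = deque([sink])
    while frontier:
        node = frontier.popleft()
        for nb in network[node]:
            if nb not in gradient:        # first copy wins; duplicates ignored
                gradient[nb] = node       # reply link toward the sink
                frontier.append(nb)
    return gradient

def send_along_gradients(gradient, source):
    """Path taken by data drawn from a source back to the sink."""
    path = [source]
    while gradient[path[-1]] is not None:
        path.append(gradient[path[-1]])
    return path

# Hypothetical 4-node topology; 's' is the sink.
net = {'s': ['a', 'b'], 'a': ['s', 'c'], 'b': ['s', 'c'], 'c': ['a', 'b']}
g = flood_interest(net, 's')
```

In the full protocol a node keeps one gradient per neighbour the interest arrived from, which is what makes multiple paths and path repair possible; the single reply link here corresponds to the one path that ends up reinforced.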

In directed-diffusion networks, nodes are application aware, which helps to save energy by selecting best paths and by caching and processing data locally. Caching is the essence of directed-diffusion and increases the scalability and robustness of sensor networks. This paradigm is well suited to persistent queries, where the sink requests data satisfying a query over a certain period of time. For the same reason, it is unsuitable for one-time queries, since drawing gradients for a single use is expensive. For example, directed-diffusion cannot be applied to continuous environmental monitoring, because it is an on-demand model, which is not beneficial in such application domains. In [5], three factors that affect the
performance of data aggregation used in directed-diffusion were identified: the location of the source nodes within the network, the number of sources, and the network topology. In directed-diffusion, the energy saved through data aggregation is spent on improving robustness with respect to the dynamics of the sensed phenomena. Based on the previous discussion, users and sensor nodes are stationary in this paradigm, and observed events are sent back to the sink by unicast or multicast, while interests can be transmitted by broadcast, multicast, or unicast.

In SPIN, communication is initiated by sensor nodes advertising the availability of data, giving nodes interested in the advertised data a way to request it. In directed-diffusion, by contrast, communication is started by a query that the sink floods to the sensor nodes (on-demand). In the following we list some of the strengths and weaknesses of the directed-diffusion paradigm.

Points of strength:

 Scalable, since it is based on local interaction only

 Latency is minimized by selecting best path among multiple available paths

 Generates less traffic than flooding thanks to data aggregation, which reduces energy consumption

 It is robust due to low-rate data gradients and interest retransmission

 On-demand data transmission

Points of weakness:

 High cost of gradient setup

 It is not energy aware as the best paths might be always used

 Absence of mechanisms to select alternative paths and retransmit interests


 Not suitable for applications that require continuous data delivery to sink such as environmental monitoring

 Naming schemes are application dependent and must be specified before deployment

 Comparing data and interest might add processing overhead at the sensor nodes

3.1.5 Energy-Aware Routing

Energy-Aware Routing [25] is a destination-initiated reactive protocol that aims to increase network lifetime. It differs from directed-diffusion in that it uses a set of sub-optimal paths to increase network lifetime instead of always using the minimum-energy path, which would deplete the energy of the nodes along it. These paths are chosen and maintained by means of a probability function that depends on energy consumption. The protocol assumes that each node is addressable through a class-based addressing scheme that includes the location and type of the node. The protocol operates in three phases:

i. Setup phase: The protocol initializes routing tables and discovers routes through localized flooding. In this phase the total energy cost of each node is calculated. Routing tables are constructed by assigning each neighbour a probability corresponding to its cost; paths with a very high cost are discarded.

ii. Data communication phase: Nodes use the routing tables constructed in the previous phase to send data to the destination, choosing next hops with probability inversely proportional to their cost.

iii. Route maintenance: Localized flooding is performed, as in the setup phase, to update routes and keep all paths alive.

In the Energy-Aware Routing approach, the path from source to destination is calculated in a similar way to directed-diffusion, and a path is randomly
selected from a list of alternatives. In directed-diffusion, however, data is sent through multiple paths and only one of them is reinforced. The use of a single path makes recovering from a path failure a much more complicated process than in directed-diffusion. In addition, setting up the addressing mechanism for nodes and collecting location information add complexity to the approach.
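The cost-weighted random next-hop selection used in the data communication phase can be sketched as follows; the routing table, neighbour names, and cost values are illustrative, and the exact probability function in [25] differs in detail:

```python
import random

def next_hop(routing_table):
    """Pick a next hop with probability inversely proportional to its
    energy cost. `routing_table` maps neighbour -> path cost
    (illustrative values, as built in the setup phase)."""
    inv = {n: 1.0 / c for n, c in routing_table.items()}
    total = sum(inv.values())
    r = random.uniform(0, total)
    acc = 0.0
    for n, w in inv.items():
        acc += w
        if r <= acc:
            return n
    return n  # guard against floating-point rounding at the boundary

# Two candidate paths: the cheaper one (cost 1) should be chosen roughly
# twice as often as the costlier one (cost 2), spreading load over both.
random.seed(0)
table = {'via_x': 1.0, 'via_y': 2.0}
picks = [next_hop(table) for _ in range(3000)]
```

The point of the randomization is exactly the one the section makes: the cheap path is preferred but not used exclusively, so no single route is depleted.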

3.1.6 Rumor routing

Rumor routing [26] is a variation of directed-diffusion intended for applications where geographic routing is not possible. When there is no geographic criterion for diffusing a task, directed-diffusion floods the query to all nodes in the network. It is not always necessary to flood the network, especially when the amount of data requested from nodes is small. An alternative approach is to flood events when the number of events is small and the number of queries is large. In this approach, only nodes that have sensed a particular event receive the queries, rather than flooding the entire network to retrieve information about observed events.

To flood events through the network, Rumor routing nodes maintain an event table, whose entries are observed events, and create agents, which are long-lived packets. Agents traverse the network to propagate information about observed events to distant nodes. Nodes that know a route may respond to queries by checking their event tables. Unwanted flooding is thereby avoided, reducing costly communication overhead. Rumor routing keeps only a single path between communication end-points, as opposed to directed-diffusion, which maintains multiple paths at low rates. Simulation results revealed that Rumor routing performance is inversely proportional to the number of events. It has been shown that Rumor routing is robust and can recover from failures efficiently. In addition, it achieves significant energy savings compared to classical flooding.


When the number of events becomes very large and there is not enough interest in these events, the cost of maintaining agents and event tables in every node may become very high or even infeasible. Another concern in this approach is tuning parameters, such as time-to-live, that manage the overhead of queries and agents. The selection of the next hop in Rumor routing is affected by the way the route of an event agent is defined.

3.1.7 Routing protocols with random walks

This paradigm was proposed in [27] for large-scale networks in which nodes have limited mobility. Random-walk-based routing aims to achieve load balancing using multipath routing in a statistical manner. Similar to Energy-Aware Routing, this protocol assumes that each node has a unique identifier, but it does not need location information; it is also assumed that each node can be turned on or off at random times. The topology can be irregular, but nodes are placed at the crossing points of a regular grid on a plane. Using a distributed asynchronous version of the Bellman-Ford algorithm, the route from source to destination can be identified. According to a computed probability, an intermediate node selects the neighbouring node that is closer to the destination as the next hop. Load balancing is achieved by carefully calculating this probability. In conclusion, this protocol is simple, since nodes need to maintain little state information, and different routes are selected between the same source-destination pair at different times. However, the assumed grid topology may be impractical.

3.2 Gradient-based routing

Gradient-based routing (GBR) is another variant of directed-diffusion, proposed by Schurgers et al. [9]. The basic idea is to memorize the number of hops taken when an interest is flooded through the network. Hence,
each node can calculate the minimum number of hops needed to reach the sink. This parameter is called the height of the node. The gradient on a specific link between two neighbouring nodes is calculated as the difference between a node's height and that of its neighbour. A packet is forwarded on the link with the largest gradient. GBR uses auxiliary techniques, such as data aggregation and traffic spreading, to divide the traffic uniformly over the network. When a node lies on multiple paths, i.e. acts as a relay node, it can combine data according to a certain function. In addition, three different data spreading techniques have been presented:

1. Stochastic-based: when there are two or more next hops with equal gradients the node chooses one among them randomly.

2. An energy-based scheme: when energy approaches a certain threshold, a node increases its height to discourage other sensor nodes from sending data to that node.

3. A stream-based scheme: divert new streams away from nodes that are currently part of the path of other streams.

Data spreading contributes to a balanced distribution of traffic in the network, which is the main objective of this algorithm. The discussed techniques for traffic load balancing and data fusion are also applicable to other routing protocols for enhanced performance. Simulation results have shown that GBR outperforms directed-diffusion in terms of total communication energy.
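A minimal sketch of GBR's height computation and largest-gradient forwarding; the topology and function names here are illustrative assumptions:

```python
from collections import deque

def heights(network, sink):
    """BFS from the sink gives each node its height: the minimum number
    of hops to the sink, memorized while the interest floods the network."""
    h = {sink: 0}
    q = deque([sink])
    while q:
        node = q.popleft()
        for nb in network[node]:
            if nb not in h:
                h[nb] = h[node] + 1
                q.append(nb)
    return h

def forward(network, h, node):
    """GBR forwards a packet on the link with the largest gradient,
    i.e. the largest drop in height toward the sink."""
    return max(network[node], key=lambda nb: h[node] - h[nb])

# Four nodes on a line; 's' is the sink.
net = {'s': ['a'], 'a': ['s', 'b'], 'b': ['a', 'c'], 'c': ['b']}
h = heights(net, 's')
hop = forward(net, h, 'c')   # 'b', one step closer to the sink
```

The stochastic and energy-based spreading schemes above would modify this sketch by randomizing ties in `forward` or by artificially raising a low-energy node's entry in `h`.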

3.2.1 CADR and IDSQ

Two routing techniques were proposed in [28]: constrained anisotropic diffusion routing (CADR) and information-driven sensor querying (IDSQ). CADR aims to be a general form of directed-diffusion. The main idea is to query nodes and route packets such that the information gain
is maximized while latency and bandwidth are minimized. CADR generates queries using certain criteria to select the sensors that can obtain the data, activating only the sensors that are near the event of concern and dynamically adjusting data routes. The difference from directed-diffusion is the consideration of information gain in addition to communication cost. In CADR, each node evaluates an information/cost objective and routes data based on the local information/cost gradient and end-user requirements. Estimation theory was used to model the information utility. IDSQ, on the other hand, does not specifically define how queries and information are routed, but it provides a mechanism for choosing the best order of sensors for maximum incremental information gain. Therefore, IDSQ can be viewed as a complementary optimization procedure. Simulation results confirmed that these techniques are more efficient than directed-diffusion, where queries are diffused in an isotropic fashion and reach the closest neighbours first.

3.2.2 COUGAR

COUGAR was proposed in [29], where the network is viewed as a huge distributed database system. The basic idea is to abstract query processing by using declarative queries. In addition, COUGAR utilizes in-network data aggregation to achieve greater energy savings. A new abstraction layer is added between the network and application layers to provide a network-layer-independent method for data queries. The architecture proposed in COUGAR has a node, called the leader, that performs aggregation and transmits the data to the sink. The leader is selected by the other sensor nodes in the distributed database system. The sink generates a query plan that specifies the necessary information about data flow and in-network computation for the incoming query, sends it to the relevant nodes, and describes how to select a leader for the query. This architecture provides
in-network computation that can save energy when the amount of generated data is large. Nevertheless, COUGAR has three prominent drawbacks:

1. The addition of a new query layer at each sensor node adds extra energy consumption and storage overhead.

2. In-network data computation requires synchronization among nodes before any data is sent to the leader node.

3. Leader nodes require a dynamic maintenance mechanism to prevent them from being depleted of energy.

3.2.3 ACQUIRE

ACQUIRE (ACtive QUery forwarding In sensoR nEtworks) was proposed by Sadagopan et al. in [30]. It views the network as a distributed database where complex queries can be divided into sub-queries. Its operation can be described as follows. The query is generated by the sink, and each node receiving the query tries to respond to it partially using its pre-cached information, then forwards it to other sensors. Nodes whose cache is not up to date gather information from their neighbours within a look-ahead of d hops. Once the query has been resolved completely, it is sent back to the sink through either the reverse or the shortest path. ACQUIRE thus handles complex queries by allowing many nodes to contribute to the response. Other data-centric protocols, such as directed-diffusion, use flooding-based query mechanisms for continuous and aggregate queries, which makes them impractical for complex queries due to energy constraints. ACQUIRE provides efficient querying by adjusting the value of the look-ahead parameter d. Note that when d is equal to the network diameter, ACQUIRE behaves like standard flooding, whereas when d is very small the query has to travel through more nodes.


Mathematical modelling was used to derive the optimal value of the look-ahead parameter d for a grid of sensors in which each node has four immediate neighbours. However, the results were not validated by simulation, and the reception cost was not taken into account in the model.

The selection of the next node for forwarding a query was also addressed in CADR [28] and Rumor routing [26]. In CADR, querying nodes use the IDSQ mechanism with estimation theory to determine which node can provide the most useful information. Rumor routing tries to forward a query to a node that knows a path to the searched event. In [30], the next hop is chosen either randomly or based on the maximum potential for query satisfaction.

3.3 Hierarchical protocols

Hierarchical protocols, also called cluster-based protocols, are two-tier protocols known for their scalability and efficient communication. Clustering was originally developed for traditional wired networks but was later utilized to perform energy-efficient routing in sensor networks. A single-tier network can cause congestion at the gateway, especially with a high sensor density. This leads to communication delays and inadequate tracking of events, in addition to limited scalability. To overcome these problems without degrading the service, network clustering has been proposed in several routing approaches. In hierarchical architectures, clusters are created and special tasks are assigned to cluster-heads; this must be done with great care, because it has a strong impact on overall system scalability, network lifetime, and energy consumption. In hierarchical protocols, high-energy nodes can be used to process and send information, while low-energy nodes perform the sensing. Hierarchical routing is two-tier routing where one tier is used to elect cluster-heads and the other
for routing. Cluster-heads reduce energy consumption by performing data aggregation and fusion, which decreases the number of messages sent towards the sink. In this section we review hierarchical routing protocols.

3.3.1 LEACH

Low Energy Adaptive Clustering Hierarchy (LEACH) [10] is one of the first hierarchical routing algorithms for sensor networks. LEACH randomly selects sensor nodes as cluster-heads and rotates this role to distribute the energy load among the sensors, since always using the same node would deplete its energy. The cluster-heads aggregate data received from their nodes to reduce the number of messages sent to the sink. This approach is well suited to applications where constant monitoring is needed and periodic data collection is centralized. Energy is saved because only cluster-head nodes transmit to the sink rather than all sensor nodes. Based on simulations, the optimal number of cluster-heads is estimated to be five percent of the total number of nodes.

The operation of LEACH is split into two phases, the setup phase and the steady-state phase. To minimize overhead, the duration of the steady-state phase is longer than that of the setup phase. During the setup phase, the clusters are created and cluster-heads are selected. Each node chooses a random number between zero and one; the node becomes a cluster-head if this number is less than the threshold T(n), calculated as follows:

T(n) = P / [1 - P * (r mod (1/P))]   if n ∈ G
T(n) = 0                             otherwise

where P is the desired percentage of cluster-heads, r is the current round, and G is the set of nodes that have not been selected as a cluster-head in the last 1/P rounds. After cluster-heads are chosen, they broadcast an advertisement to the entire network announcing that they are the new
cluster-heads. Every node receiving the advertisements decides which cluster it wants to belong to based on signal strength, and sends a message to register with the cluster-head of its choice. Using a TDMA approach, the cluster-head assigns each node registered in its cluster a time slot in which it can send data.

During the steady-state phase, cluster nodes sense and transmit data to their cluster-heads. All data processing, such as data fusion and aggregation, is local to the cluster. After a certain period of time spent in the steady-state phase, the network enters the setup phase again and starts a new round of cluster-head selection.
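The threshold-based election can be sketched as follows; the choice of P, the round number, and the node identifiers are illustrative, and `eligible` encodes membership in the set G:

```python
import random

def leach_threshold(P, r, eligible):
    """LEACH election threshold T(n). P is the desired fraction of
    cluster-heads, r the current round, and `eligible` is True when
    the node has not served as a head in the last 1/P rounds (set G)."""
    if not eligible:
        return 0.0
    return P / (1 - P * (r % round(1 / P)))

def elect_heads(node_ids, P, r, served_recently):
    """Each node draws a random number in [0, 1) and becomes a
    cluster-head when the draw falls below its threshold."""
    heads = []
    for n in node_ids:
        t = leach_threshold(P, r, n not in served_recently)
        if random.random() < t:
            heads.append(n)
    return heads

# 100 nodes, desired 5% cluster-heads; nodes 1-3 served recently.
random.seed(7)
heads = elect_heads(range(100), P=0.05, r=3, served_recently={1, 2, 3})
```

Note how the denominator shrinks as r advances within a 1/P-round cycle, so nodes still in G face a rising threshold and the last eligible nodes are elected with probability approaching one, which is what guarantees the rotation.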

LEACH is able to increase the network lifetime. It achieves over a factor of seven reduction in energy dissipation compared to direct communication and a factor of four to eight compared to the minimum transmission energy routing protocol. LEACH has a number of drawbacks listed in [5]:

1. Assumes that all nodes can transmit with enough power to reach the sink.

2. It is not applicable to networks deployed in large regions.

3. Dynamic clustering brings extra overhead, e.g. head changes, advertisements, etc., which may diminish the gain in energy consumption.

4. Assumes that all nodes begin with the same amount of energy and a cluster-head consumes approximately the same amount of energy.

5. Assigns a time slot to each node even if the node has no data to transmit.

Table 2.1: Comparison of SPIN, LEACH, and directed-diffusion

                     SPIN    LEACH      Directed-diffusion
Optimal route        No      No         Yes
Network lifetime     Good    Very good  Good
Resource awareness   Yes     Yes        Yes
Use of meta-data     Yes     No         Yes

LEACH with negotiation is an extension of LEACH proposed in [10]. It uses meta-data, as in SPIN, prior to data transfer to ensure that only interesting data is transmitted to cluster-heads before being forwarded to the sink. Table 2.1, taken from [5], shows a brief comparison between SPIN, LEACH, and directed-diffusion.

3.3.2 PEGASIS

Power-Efficient GAthering in Sensor Information Systems (PEGASIS), an enhancement of the LEACH protocol, was proposed in [31]. As opposed to LEACH, PEGASIS has no clusters; instead it builds chains of sensor nodes so that each node communicates only with its closest neighbours, and only one node in the chain is selected to communicate with the sink. When the round in which all nodes communicate with the sink ends, a new round starts, and so on. This distributes energy consumption uniformly among all nodes. PEGASIS forms near-optimal chains in a greedy way. The use of such collaborative techniques increases node and network lifetime, and local coordination between nearby nodes also reduces the communication bandwidth consumed.

PEGASIS nodes use signal strength to measure the distance to neighbouring nodes. Each node aggregates the data to be sent to the sink; any node in the chain can send to the sink, and the nodes in the chain take turns doing so. Simulation results in [31] showed that PEGASIS outperforms LEACH by about 100 to 300% for different network sizes and topologies. This performance is achieved through the use of data aggregation and the elimination of the overhead brought by cluster formation. In order to route its data, a node needs to know the energy levels of its neighbours, which requires dynamic adjustment of the PEGASIS topology. Such topological adjustments, especially in highly utilized networks, may introduce significant overhead. Another important drawback is the absence of
methods by which nodes determine their location in the network; it is assumed that every node maintains a complete database of the locations of all other nodes. Moreover, this protocol assumes that each sensor node can communicate with the sink directly and does not outline multi-hop communication to reach the sink. In addition, PEGASIS assumes that all nodes start with the same level of energy and that consumption rates are equal. The single head of the chain can also become a bottleneck, and distant nodes may suffer from excessive delays. Finally, in terms of modelling, this approach also assumes that nodes are stationary.
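The greedy chain construction can be sketched as repeatedly appending the closest not-yet-chained node; the node positions here are made up for illustration, and real PEGASIS starts the chain from the node farthest from the sink:

```python
import math

def greedy_chain(positions, start):
    """PEGASIS-style greedy chain: beginning from `start`, repeatedly
    append the closest remaining node, so that each node in the chain
    communicates only with a near neighbour."""
    chain = [start]
    remaining = set(positions) - {start}
    while remaining:
        last = chain[-1]
        nxt = min(remaining,
                  key=lambda n: math.dist(positions[last], positions[n]))
        chain.append(nxt)
        remaining.remove(nxt)
    return chain

# Four nodes on a line: the greedy chain simply follows the line.
pos = {'a': (0, 0), 'b': (1, 0), 'c': (2, 0), 'd': (3, 0)}
chain = greedy_chain(pos, 'a')   # ['a', 'b', 'c', 'd']
```

This also makes the protocol's location assumption concrete: building the chain requires the `positions` of all nodes, i.e. the complete location database the paragraph above criticizes.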

An extension to PEGASIS is Hierarchical-PEGASIS, proposed in [32] to decrease the delay incurred by packets during transmission to the sink; it suggests an energy × delay metric for the data gathering problem. Simultaneous data transmissions are utilized to reduce delay. Two approaches were studied to avoid collisions and signal interference: signal coding and spatial transmission. In the latter, only spatially separated nodes are allowed to transmit at the same time. This chain-based protocol with CDMA-capable nodes constructs a chain of nodes that forms a tree-like hierarchy, and each selected node in a particular level transmits data to a node in the level above. This method ensures parallel data transmission and reduces the delay significantly. Since nodes are not aware of their neighbours' energy levels, Hierarchical-PEGASIS still requires dynamic topological adjustment. Even so, it performs better than standard PEGASIS by a factor of about 60.

3.3.3 TEEN

Threshold-Sensitive Energy Efficient Protocols (TEEN) [33] were proposed for time-critical applications in which the network operates in a reactive mode.


TEEN utilizes a hierarchical approach along with a data-centric mechanism. It is well suited to situations where the environment is sensed continuously but data transmission occurs less frequently. The basic idea is to group nearby nodes into clusters, with this process repeated at a second level until the sink is reached. Figure 6, redrawn from [33], shows the TEEN network architecture.

Figure 6: Hierarchical Clustering in TEEN & APTEEN

Each cluster-head broadcasts to its members a hard threshold (the minimum value of the sensed attribute that should be reported to the cluster-head) and a soft threshold (the minimum change in the value of the sensed attribute that triggers the node to switch on its transmitter and transmit). The hard threshold thus reduces the number of transmissions by making nodes transmit only when the sensed attribute is in the range of interest, while the soft threshold reduces the number of transmissions further by suppressing messages about little or no change in the sensed attribute. As a result, the user can control the trade-off between energy efficiency and data accuracy: a smaller value of the soft threshold gives a more accurate
picture of the environment but, on the other hand, consumes more energy. Note that the user can change both threshold values as required, but this is impractical for applications where periodic reports are needed, since the broadcast message may be lost and, consequently, the nodes would never communicate.
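The combined effect of the two thresholds can be sketched as a simple transmit predicate; the threshold values and units below are illustrative assumptions:

```python
def should_transmit(value, last_sent, hard, soft):
    """TEEN transmit rule: a node switches its radio on only when the
    sensed value crosses the hard threshold AND has changed by at
    least the soft threshold since the last transmitted value."""
    if value < hard:
        return False                       # outside the range of interest
    if last_sent is None:
        return True                        # first crossing of the hard threshold
    return abs(value - last_sent) >= soft  # suppress near-duplicate reports

# Hard threshold 50, soft threshold 2 (units are application specific).
assert should_transmit(49.0, None, 50, 2) is False   # below hard threshold
assert should_transmit(51.0, None, 50, 2) is True    # first report
assert should_transmit(51.5, 51.0, 50, 2) is False   # change below soft threshold
assert should_transmit(53.5, 51.0, 50, 2) is True    # significant change
```

The first check is what makes TEEN reactive, and the last check is the energy/accuracy knob: lowering `soft` lets more small changes through at the cost of more transmissions.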

The Adaptive Threshold-sensitive Energy Efficient sensor Network protocol (APTEEN) [34] is an enhanced version of TEEN that allows the periodicity and threshold values used in TEEN to be changed. The architecture is the same as in TEEN, but here the cluster-head broadcasts four parameters:

 Attributes: the set of physical parameters the user is interested in

 Thresholds: hard threshold and soft threshold

 Schedule: a TDMA schedule, assigning time slot to each node to transmit

 Count time: maximum time between two successive reports sent by a node

Once a node senses a change in some attribute value equal to or greater than the soft threshold, or senses a value greater than the hard threshold, it transmits data to its cluster-head. Despite the improvements over TEEN, APTEEN adds complexity in implementing the new parameters (attributes, count time, and schedule). On the other hand, it offers more flexibility and combines proactive and reactive techniques. Simulation results have shown that both TEEN and APTEEN outperform LEACH [10]. In terms of energy dissipation and network lifetime, TEEN gives the best results, while APTEEN lies between LEACH and TEEN.

3.3.4 Self-organizing protocol

Subramanian et al. [35] describe not only a self-organizing protocol but also an application taxonomy. The proposed taxonomy was used to build
architecture and infrastructure components that support heterogeneous sensors. Sensors, which may be mobile or stationary, probe the environment and transmit messages to a designated set of stationary nodes that act as routers. Each node should be reachable by a router; the routers form the backbone for transmitting data to more powerful sink nodes. Each sensing node is assumed to have a unique identifier and can be identified through the address of the router node to which it is connected. The routing architecture is hierarchical: groups of nodes are formed and merged when needed. Fault tolerance is achieved by using the Local Markov Loops (LML) algorithm in broadcast trees. In this approach, router nodes keep the entire sensor network connected by forming a dominating set. The phases of self-organizing the router nodes and initializing the routing tables are:

 Discovery phase: discover neighbouring nodes

 Organization phase: groups are formed and merged, with each node identified by the address of the router node it is connected to; routing tables and broadcast trees are built

 Maintenance phase: routing tables update messages are exchanged and broadcast trees are maintained by LML

 Self-reorganization phase: triggered when node or partition failures occur

Since sensor nodes can be addressed individually in this routing architecture, the approach is suitable for applications where communication with a particular node is required. Moreover, the cost of maintaining routing tables and keeping a balanced routing hierarchy is minimized, which is one of the several strengths of this approach. The energy consumed in broadcasting a message is also less than in the SPIN protocol [22]. However, this is not an on-demand protocol, which adds overhead in the organization phase of the algorithm. Furthermore, if the network suffers many cuts, forming the hierarchy
will be very expensive, because network cuts increase the probability of triggering the reorganization phase.

3.4 Location-based protocols

In this category of protocols, nodes are addressed by their locations. The distance between nodes is estimated from incoming signal strengths and used to estimate energy consumption. By estimating signal strengths, nodes can calculate the relative coordinates of neighbouring nodes, which can be utilized to route data efficiently, especially in the absence of a standard addressing scheme [36]. Furthermore, nodes equipped with a low-power GPS receiver can obtain their location directly by communicating with a satellite [37]. Location information can be used to diffuse queries efficiently to a certain region of the sensor network rather than flooding the entire network. To save more energy, some location-based schemes require that nodes go to sleep when there is no activity. Some protocols that were originally developed for mobile ad hoc networks are also applicable to sensor networks, while others, like Cartesian and trajectory-based routing [38] [39], are not. In this section we review the most prominent energy-aware location-based protocols.

3.4.1 GAF

Geographic Adaptive Fidelity (GAF) [37] is an energy-aware location-based routing algorithm developed for mobile ad hoc networks that can also be used in sensor networks. It can be considered a hierarchical protocol where the clusters are based on geographic location. The network area is divided into fixed zones to form a virtual grid. Each node associates itself with a zone in the grid based on its GPS-indicated location. Nodes within the same zone collaborate to save energy by turning off unnecessary nodes without affecting the level of routing fidelity.
