Available online at www.ijiere.com

International Journal of Innovative and Emerging

Research in Engineering

e-ISSN: 2394 - 3343 p-ISSN: 2394 - 5494

"Efficient Data Access Method in DTNs by Using NCL Selection Metric"

Miss. Sonal Kishanrao Bhoge (ME Student), Prof. S.S. Hippargi (Assistant Professor)

Electronics and Telecommunication Department, N.B. Navale Sinhgad College of Engineering, Solapur University, Solapur, India

ABSTRACT:

When mobile nodes move freely in Disruption Tolerant Networks (DTNs), network partitions occur. Mobile nodes in one partition cannot access data hosted by nodes in other partitions because of connectivity problems, data access delay, and the lack of immediate query responses, all of which degrade the performance of data access in the network. DTNs are characterized by low node density, unpredictable node mobility, and a lack of global network information, so only intermittent network connectivity can be established. The resulting difficulty of maintaining end-to-end communication links makes it necessary to use carry-and-forward methods for data transmission. Most research efforts in DTNs so far focus on data forwarding, and only limited work has been done on providing efficient data access to mobile users. In this paper, we propose a strategy with greatly improved energy efficiency that speeds up the storing and retrieval of cached data and supports cooperative caching in DTNs. With the NCL (Network Central Location) selection approach, we address the complications of estimating data transmission delay and data access delay for mobile nodes. Our extensive performance analysis demonstrates that NCLs provide efficient data access in such networks.

Keywords: Disruption tolerant networks, Data access, Network Central Locations, Cooperative caching

I. INTRODUCTION

Disruption Tolerant Networks are also classified as opportunistic mobile networks. In opportunistic mobile networks, mobile nodes maintain information about the shortest opportunistic paths between each other in a distance-vector manner when they come into contact. Data access can be provided to these nodes via cooperative caching without support from cellular network infrastructure, but only limited work has been done on maintaining efficient data access [16]. In Disruption Tolerant Networks it is generally difficult to maintain end-to-end communication links among mobile nodes due to low node density, unpredictable node mobility, and the lack of global network information. Consequently, this difficulty makes it necessary to use carry-and-forward methods for data transmission [11]: node mobility is exploited to let mobile nodes carry data as relays and forward them opportunistically when contacting each other. The key problem is how to determine the appropriate relay selection to distribute data to the final destination node, and appropriate node design is needed to ensure that data can be promptly accessed by requesters. Not all nodes in a DTN need large storage capacity to store and forward data. Examples of such networks arise in military fields, disaster recovery areas, extreme terrestrial environments, and also in VANETs. A vehicular ad hoc network (VANET) uses moving vehicles as nodes to create a mobile network [11]; a VANET turns every participating vehicle into a wireless router or node. In these applications, a novel technique to improve the performance of data access is caching, i.e., caching data at appropriate Network Central Locations (NCLs) based on query history, so that future queries can be answered with less delay.

A node in such a network would either have to move at moderate speeds or increase its transmission power in order not to be isolated from the network. Node mobility is thus one of the main problems identified when routing is performed, so we use the DSR protocol. Since DSR gives a high response with respect to energy in networks with varying node densities, with the cooperative caching technique and NCL selection we force it to reiterate its broadcasting events whenever there is a broken link or when the destination cannot be found over a certain period of time. In a study [2] comparing DSR and AODV, the author highlighted the usefulness of promiscuous listening, which allows DSR to possess a greater amount of routing information than AODV might have in its routing table; AODV therefore needs to perform more route discoveries than DSR in a similar environment. In contrast to other schemes for data access, in this paper we propose to support cooperative caching in a fully distributed manner in DTNs with heterogeneous node contact patterns and behaviors.

The rest of the paper is organized as follows. Section II briefly reviews related work. Section III presents caching strategies in different network environments, Section IV the concept of NCLs and the NCL caching technique, Section V NCL load balancing and its impact, Section VI the probabilistic data selection strategy, and Section VII the simulation results. Finally, Section VIII concludes the paper.

II. RELATED WORK

Much research on routing in DTNs builds on Epidemic routing [3], and later studies develop relay selection strategies that approach the performance of Epidemic routing with lower forwarding cost based on the prediction of node contacts in the future. Some schemes make such predictions based on node mobility patterns, which are characterized by Kalman filters and semi-Markov chains [4]; a time-homogeneous semi-Markov process model predicts the probability distribution of contact times. The aforementioned metrics for relay selection can be applied to various forwarding strategies, which differ in the number of data copies created in the network.

Caching is another way to provide data access. Cooperative caching in wireless ad hoc networks was studied in [5], in which each node caches pass-by data based on data popularity, so that future queries can be answered with less delay. Caching locations are selected incidentally among all the network nodes. Some research efforts [6], [7] have been made for caching in DTNs, but they only improve data accessibility from infrastructure networks such as WiFi access points (APs) [7] or the Internet [6]; peer-to-peer data sharing and access among mobile users are generally neglected. Distributed determination of caching policies for minimizing data access delay has been studied in DTNs [8], [9] under simplified network conditions: in [8], all nodes are assumed to contact each other at the same rate, and in [9], users are artificially partitioned into several classes such that users in the same class are identical. In [10], data are intentionally cached at appropriate network locations with generic data and query models, but these caching locations are determined based on global network knowledge. Comparatively, in this paper, we propose to support cooperative caching in a fully distributed manner in DTNs, with heterogeneous node contact patterns and behaviors.

III. CACHING STRATEGIES IN DIFFERENT NETWORK ENVIRONMENTS

The caching nodes or the data source reply to requesters who send queries. The main difference between caching strategies in wireless ad hoc networks and in DTNs is illustrated in Fig. 1; we assume each node has limited buffer space, since otherwise data could be cached everywhere and designing caching strategies would be trivial. In a wireless ad hoc network, the path from the requester to the data source remains unchanged during data access in most cases. This assumption enables intermediate nodes on the path to cache pass-by data. For example, in Fig. 1a, C forwards all three queries to data sources A and B, and also forwards data d1 and d2 to the requesters. With limited cache space, C caches the more popular data d1 based on query history, and data d2 are similarly cached at node K. In general, any node can cache pass-by data incidentally.

Figure.1. caching strategies for different network environment [11].

In DTNs, by contrast, the network topology changes during data access, which complicates the caching decision. For example, in Fig. 1b, after having forwarded query q2 to A, node C loses its connection to G and cannot cache the data d1 replied to requester E. Node H, which forwards the replied data to E, does not cache the pass-by data d1 either, because it did not record query q2 and considers d1 less popular. In this case, d1 will be cached at node G and hence needs a longer time to be delivered to the requester. Our basic solution to improve caching performance in DTNs is to restrain the scope of nodes involved in caching. Instead of being incidentally cached "anywhere," data are intentionally cached only at specific nodes. These nodes are carefully selected to ensure data accessibility, and constraining the scope of caching locations reduces the complexity of maintaining query history and making caching decisions.

IV. CONCEPT OF NCL AND NCL CACHING TECHNIQUE

Figure.2 NCL function [14]

When node A generates a query, the server makes the nearest NCL reply to it. If the buffer space of the allocated NCL, i.e., a popular node in the network, is full, the request is sent to the next nearest NCL, as shown in Fig. 2. Hence, future queries are answered with less delay. In the following sections we discuss how to select a proper NCL.

A. NCL selection metric-

The network model for intentional caching is obtained from opportunistic contacts in DTNs and is described by a network contact graph G = (V, E), where the stochastic contact process between a node pair i, j ∈ V is modeled as an edge e_ij ∈ E. We assume that node contacts are symmetric, i.e., node j contacts i whenever i contacts j, so the network contact graph is undirected. The characteristics of an edge e_ij ∈ E are determined by the properties of the inter-contact time between nodes. As in previous work [12], [13], we consider the pairwise node inter-contact time to be exponentially distributed. Contacts between nodes i and j then form a Poisson process with contact rate λ_ij, which is calculated in real time from the cumulative contacts between i and j since the network started. In the rest of this paper, we call the node set {j | λ_ij > 0} ⊆ V the contacted neighbors of i.
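As an illustration of the contact model above, the running estimate of λ_ij from cumulative contacts can be sketched as follows. This is a hypothetical Python sketch; the class and method names are our own, not from the paper.

```python
from collections import defaultdict

class ContactEstimator:
    """Maintains pairwise contact rates lambda_ij under the Poisson model.

    The paper only states that lambda_ij is computed in real time from the
    cumulative contacts since the network started; this class is one
    possible realization of that bookkeeping.
    """

    def __init__(self, start_time):
        self.start_time = start_time
        self.contacts = defaultdict(int)  # canonical (i, j) -> contact count

    @staticmethod
    def _key(i, j):
        # Contacts are symmetric, so store each pair under a canonical key.
        return (i, j) if i <= j else (j, i)

    def record_contact(self, i, j):
        self.contacts[self._key(i, j)] += 1

    def rate(self, i, j, now):
        # lambda_ij = cumulative contacts / elapsed time since network start.
        elapsed = now - self.start_time
        return self.contacts[self._key(i, j)] / elapsed if elapsed > 0 else 0.0

    def contacted_neighbors(self, i):
        # {j | lambda_ij > 0}: every node that i has ever contacted.
        return {b if a == i else a
                for (a, b), n in self.contacts.items()
                if i in (a, b) and n > 0}
```

For example, two recorded A-B contacts over 100 time units yield λ_AB = 0.02, and B becomes a contacted neighbor of A.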

We first define a multihop opportunistic connection on the network contact graph G = (V, E). In the following subsections we discuss opportunistic path selection and NCL selection under the assumption of complete network information from a global perspective. Afterwards, we propose distributed NCL selection methods that efficiently approximate the global selection results and can operate on individual nodes in an autonomous manner.

B. Definition (Opportunistic path). An r-hop opportunistic path P_AB = (V_P, E_P) between nodes A and B consists of a node set V_P = {A, N_1, N_2, …, N_{r-1}, B} ⊂ V and an edge set E_P = {e_1, e_2, …, e_r} ⊂ E with edge weights {λ_1, λ_2, …, λ_r}. The path weight p_AB(T) is the probability that data are opportunistically transmitted from A to B along P_AB within time T. An opportunistic path is illustrated in Fig. 3. The inter-contact time X_k between nodes N_k and N_{k+1} on P_AB follows an exponential distribution with probability density function (PDF)

p_{X_k}(x) = λ_k e^{-λ_k x}. …………(1)

Hence, the time needed to transmit data from A to B is

Y = Σ_{k=1}^{r} X_k, ………..(2)

which follows a hypoexponential distribution [15], such that its PDF is

p_Y(x) = Σ_{k=1}^{r} C_k^{(r)} λ_k e^{-λ_k x}, …………(3)

where the coefficients are

C_k^{(r)} = Π_{s=1, s≠k}^{r} λ_s / (λ_s - λ_k).

From (3), the path weight is written as

p_AB(T) = ∫_0^T p_Y(x) dx = Σ_{k=1}^{r} C_k^{(r)} (1 - e^{-λ_k T}), …………(4)
and the data transmission delay between two nodes A and B, denoted by the random variable Y, is measured by the weight of the shortest opportunistic path between the two nodes. In practice, mobile nodes maintain information about the shortest opportunistic paths between each other in a distance-vector manner when they come into contact.

Figure.3 Opportunistic path. [11]
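The closed-form path weight in (4) can be computed directly from the per-hop contact rates. The following Python sketch assumes the rates λ_k are pairwise distinct, as the hypoexponential coefficients C_k^{(r)} require; the function name is our own.

```python
import math

def path_weight(rates, T):
    """p_AB(T): probability that data traverse an r-hop opportunistic path
    within time T, per Eqs. (1)-(4), assuming exponentially distributed
    inter-contact times with distinct per-hop rates lambda_k."""
    r = len(rates)
    total = 0.0
    for k in range(r):
        # Coefficient C_k^{(r)} = prod_{s != k} lambda_s / (lambda_s - lambda_k)
        c = 1.0
        for s in range(r):
            if s != k:
                c *= rates[s] / (rates[s] - rates[k])
        total += c * (1.0 - math.exp(-rates[k] * T))
    return total
```

For a single hop the formula degenerates to 1 - e^{-λT}, and adding hops can only lower the weight for the same T, matching the intuition that longer paths deliver data more slowly.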

The metric C_i for a node i to be selected as a central node representing an NCL is then defined as

C_i = (1 / |V|) Σ_{j∈V} p_ji(T), ………(5)

where we define p_ii(T) = 0. This metric indicates the average probability that data can be transmitted from a random node to node i within time T. From (4) and (5), the value of C_i decreases exponentially when T decreases, since every path weight p_ji(T) does.

C. Global Selection-

When global knowledge of the pairwise node contact rates and the shortest opportunistic paths among mobile nodes is available, the central nodes representing NCLs can be selected sequentially by the network administrator before data access. Let I_NC denote the set of already selected central nodes; each time, the node in V \ I_NC with the highest metric value is selected as the next central node, until the required K central nodes are selected. In particular, we exclude the set I_NC of existing central nodes when calculating C_i in (5), i.e.,

C_i = (1 / |V \ I_NC|) Σ_{j ∈ V \ I_NC} p_ji(T). ………(6)

By doing so, we ensure that the selected central nodes will not be clustered on the network contact graph. The parameter T used in (6) is determined by the average node contact frequency.
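The greedy global selection described above can be sketched as follows. This Python sketch assumes the pairwise path weights p_ji(T) have already been evaluated at the chosen horizon T; the function and variable names are our own.

```python
def select_central_nodes(nodes, p, K):
    """Greedy global NCL selection (a sketch of Section IV-C).

    nodes: list of node identifiers (K <= len(nodes) assumed).
    p: dict mapping (j, i) -> p_ji(T) for j != i.
    Already-selected central nodes are excluded from the metric,
    per Eq. (6), so NCLs do not cluster on the contact graph.
    """
    selected = []
    while len(selected) < K:
        remaining = [n for n in nodes if n not in selected]

        def metric(i):
            # Average probability that data reach i from a random
            # remaining node within T (the Eq. (5)/(6) metric).
            others = [j for j in remaining if j != i]
            if not others:
                return 0.0
            return sum(p[(j, i)] for j in others) / len(others)

        selected.append(max(remaining, key=metric))
    return selected
```

With three nodes where B has the highest average path weight to the others, B is picked first, and the second pick is made over the remaining two nodes only.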

D. Distributed selection-

When global network knowledge is unavailable, a node maintains information about pairwise contact rates and shortest opportunistic paths to other nodes via opportunistic contacts. Based on the definition of the opportunistic path in Section IV-B, Lemma 1 [11] formally shows that the distributed maintenance of opportunistic paths in DTNs cannot be done in an iterative manner. Due to the uncertainty of opportunistic data transmission in DTNs, the broadcasting range of a particular node may not cover the entire network; this broadcasting range can be formally bounded by Lemma 2 [11]. Based on Lemma 2, the following theorem evaluates the probability of inconsistency between the NCL selections of different nodes.

Theorem 1. After a broadcasting period T, the probability that the NCL selections made by two arbitrary nodes A and B in the network are inconsistent is no larger than 1 - (1 - 2p(1 - p))^K, where K is the predefined number of NCLs and p is given by

……….(7)

The same probability p also applies to node B. As a result, 2p(1 - p) provides an upper bound on the probability that the NCL selections made by nodes A and B are inconsistent on any N_i for 1 ≤ i ≤ K, and the theorem is therefore proved.

E. Caching technique-

In this section, we present our cooperative caching scheme. Our basic idea is to intentionally cache data at a set of NCLs, which can be promptly accessed by other nodes. Intentional caching is illustrated in Fig. 4.

Figure.4. Intentional caching [11]

Figure.5. Architecture design of caching scenario [14]

In our scheme, when a data source generates data, it pushes them to the central nodes of NCLs, which are prioritized to cache data. One copy of the data is cached at each NCL. If the caching buffer of a central node is full, another node near the central node caches the data; such decisions are made automatically based on the buffer conditions of the nodes involved in the pushing process. A requester multicasts a query to the central nodes of NCLs to pull data, and a central node forwards the query to the caching nodes. Multiple data copies are returned to the requester, and we optimize the tradeoff between data accessibility and transmission overhead by controlling the number of returned copies. Utility-based cache replacement is conducted whenever two caching nodes contact each other and ensures that popular data are cached nearer to central nodes. We generally cache more copies of popular data to optimize the cumulative data access delay, and we also probabilistically cache less popular data to ensure overall data accessibility. Our proposed scheme is illustrated in Fig. 4, where each NCL is represented by a central node (a star). The push and pull caching strategies conjoin at the NCLs. The data source S actively pushes its generated data toward the NCLs, and the central nodes C1 and C2 are prioritized for caching. If the buffer of central node C1 is full, data are cached at another node A near C1. Multiple nodes at an NCL may be involved in caching; note that NCLs may overlap, and a node involved in caching may belong to multiple NCLs simultaneously. A requester R pulls data by querying the NCLs, and data copies from multiple NCLs are returned to ensure prompt data access. In particular, some NCL such as C2 may be too far from R to receive the query on time and thus does not respond with data; in this case, data accessibility is determined by both the node contact frequency and the data lifetime.
Nodes in DTNs are well motivated to contribute their local resources for caching because the cached data provide prompt data access to the caching nodes themselves. As illustrated in Fig. 4, since the central nodes representing NCLs are prioritized for caching, the closer a requester is to a central node, the sooner its queries are answered by the corresponding NCL. The delay for responding to queries generated by central nodes themselves is, obviously, the shortest.
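The buffer-aware placement rule of the push phase (central node first, then nearby nodes) can be sketched as follows. The function signature and the `free_buffer` map are hypothetical; the paper does not prescribe a concrete API.

```python
def cache_at_ncl(data_size, central, neighbors_by_distance, free_buffer):
    """One copy per NCL: the central node is prioritized; if its buffer is
    full, the copy stops at the nearest node (by opportunistic distance)
    with enough free space, as in Section IV-E / Fig. 6.

    neighbors_by_distance: nodes near the central node, closest first.
    free_buffer: dict mapping node -> free buffer space.
    Returns the chosen caching node, or None if no node can hold the data.
    """
    for node in [central] + list(neighbors_by_distance):
        if free_buffer.get(node, 0) >= data_size:
            free_buffer[node] -= data_size  # reserve the space
            return node
    return None
```

For instance, if central node C1 has a full buffer, the copy lands on the closest neighbor with room, mirroring how node A caches data near C1 in Fig. 4.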

The architecture of the caching scenario shows the detailed operation of NCLs and how the cooperative caching scheme works, as shown in Fig. 5. DTNs are mainly characterized by a disturbed, mostly unstable environment in which node mobility affects connectivity and creates network partitions. The network central locations (NCLs) are therefore designed for easy accessibility by other nodes and for global distribution of reliable information all over the network. The scheme also maintains a record of contacts among users with trace-based validation. The performance of NCLs can be analyzed via the data transmission delay.

Figure.6. Determination of caching location [11].

Central node C1 is able to cache data, but the data copies sent toward C2 and C3 are stopped and cached at relays R24 and R33, respectively, because neither C2 nor R34 has enough buffer to cache them. Note that the caching location at an NCL may not be a contacted neighbor of the central node, as in the case of node R33 in Fig. 6. With this strategy, it is easy to see that the set of caching nodes at each NCL forms a connected subgraph of the network contact graph at any time during data access.

V. NCL LOAD BALANCING

From the caching scheme proposed in the previous section, we can see that the central nodes play a vital role in cooperative caching in DTNs. First, the central nodes cache the most popular data in the network and respond to the frequent queries for these data. Second, they are responsible for broadcasting all the queries they receive to other caching nodes nearby. However, this functionality may quickly consume the local resources of central nodes, including their battery life and local memory. In addition, we would like our caching scheme to be resilient to failures of central nodes. In this section, we address this challenge and propose methods that efficiently migrate the functionality of central nodes to other nodes in cases of failure or resource depletion. In general, the methods presented in this section can also be used to adjust the deployment of central nodes at runtime, such as adding or removing central nodes according to up-to-date requirements on caching performance.

A. Adjustment of caching location-

After a new central node is selected, the data cached at the NCL represented by the original central node need to be adjusted correspondingly, so as to optimize the caching performance. For example, in Fig. 7, after the functionality of central node C1 has been migrated to C3, the nodes A, B, and C near C1 are no longer considered good locations for caching data. Instead, the data cached at these nodes need to be moved to nodes near C3.

Figure.7. NCL load balancing [11]

This movement is achieved via cache replacement when caching nodes opportunistically contact each other. Each caching node at the original NCL recalculates the utilities of its cached data items with respect to the newly selected central node. In general, these utilities are reduced by the change of central node, and this reduction moves the cached data to caching locations nearer to the newly selected central node. Changes of central nodes and the subsequent adjustment of caching locations inevitably affect caching performance, as shown in Fig. 7, but this degradation is gradually eliminated over time by the opportunistic cache replacement.
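The recalculation step can be sketched with a toy utility. The exact utility function of [11] is not reproduced here; `data_utility` is only an assumed stand-in combining popularity, proximity to the central node, and item size.

```python
def data_utility(popularity, p_to_central, size):
    """Hypothetical per-item utility for cache replacement: popular items
    that are close to the current central node (high path weight within T)
    and small in size score higher. The true utility in [11] differs; this
    only illustrates the recalculation step described above."""
    return popularity * p_to_central / size

def recompute_utilities(cached_items, p_to_new_central):
    """After central-node migration, each caching node recalculates the
    utilities of its cached items with respect to the new central node."""
    return {name: data_utility(it["popularity"], p_to_new_central, it["size"])
            for name, it in cached_items.items()}
```

A node far from the newly selected central node sees its items' utilities drop, which is exactly the pressure that moves those items toward the new NCL during opportunistic cache replacement.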

B. Impact of NCL Load Balancing-

The impact of NCL load balancing on caching performance is largely determined by the specific network condition and data access pattern.

VI. PROBABILISTIC DATA SELECTION STRATEGY

The major reason is that, according to our network model, data accessibility does not increase linearly with the number of cached data copies in the network. More specifically, data accessibility increases considerably if the number of cached copies grows from 1 to 2, but the benefit is much smaller if the number grows from 10 to 11.

In such cases, for the example shown in Fig. 8, caching d1 at node A may be ineffective because the popular d1 may already be cached at many other places in the network. In contrast, removing d6 from the cache of node B may greatly impair the accessibility of d6, because there may be only a few cached copies of d6 due to its lower popularity. In other words, the basic strategy of cache replacement only optimizes the cumulative data access delay within the local scope of the two caching nodes in contact. Such optimization at the global scope is challenging in DTNs due to the difficulty of maintaining knowledge about the current number of cached data copies in the network; we instead propose a probabilistic strategy to heuristically control the number of cached data copies at the global scope.

Figure.8 Probabilistic data selection with cache replacement [11]

The algorithm for this strategy is mentioned below-

Algorithm 1- Probabilistic data selection at node A among the data set S.

1.  i_min = argmin_i { s_i | d_i ∈ S, x_i == 0 }
2.  while S ≠ ∅ and S_A > s_{i_min} do
3.      V_max = GetMax(S, S_A)
4.      S' = S
5.      while S' ≠ ∅ and V_max > 0 do
6.          i_max = argmax_i { u_i | d_i ∈ S' }
7.          if SelectData(d_{i_max}) == true and V_max ≥ s_{i_max} then
8.              x_{i_max} = 1
9.              S = S \ {d_{i_max}}
10.             S_A = S_A − s_{i_max}; V_max = V_max − s_{i_max}
11.         S' = S' \ {d_{i_max}}
12.     i_min = argmin_i { s_i | d_i ∈ S, x_i == 0 }

where GetMax(S, S_A) calculates the maximum possible value of the items in the knapsack via dynamic programming, and SelectData(d_{i_max}) decides whether to select data d_{i_max} for caching at node A by conducting a Bernoulli experiment with probability u_{i_max}.

VII. SIMULATION RESULTS

We evaluate the performance of our proposed caching scheme by comparing it with the following schemes, on the basis of simulation and graphical analysis, as shown in the figures and graphs below.

- No Cache (no hope): caching is not used for data access, and each query is answered only by the data source.
- Random Cache: every requester caches the received data to facilitate future data access.

Figure.9. Creation of wireless nodes, (a) and (b)

Figure.10. Remaining energy in Random Cache and Cache Data (a); remaining energy in No Hope/No Cache and Cache Data (b)

Figure.11. Overhead in Random Cache and Cache Data (a); overhead in No Hope/No Cache and Cache Data (b)

12(a) 12(b)

Figure.13. PDF of packet size in Random Cache and Cache Data (a); PDF of packet size in No Hope/No Cache and Cache Data (b)

VIII. CONCLUSION

In this paper, we propose a novel scheme to support cooperative caching in DTNs. Our basic idea is to intentionally cache data at a set of NCLs that can be easily accessed by other nodes. We ensure appropriate NCL selection based on a probabilistic metric, and our approach coordinates caching nodes to optimize the tradeoff between data accessibility and caching overhead. Extensive simulations show that, compared with existing schemes, our scheme greatly improves the ratio of satisfied queries and reduces the data access delay.

ACKNOWLEDGEMENT

The author would like to thank all the staff and the coordinator of the Electronics and Telecommunication Department, N.B. Navale Sinhgad College of Engineering, Solapur, for their valuable guidance and support.

REFERENCES

[1] E. Royer, P. Melliar-Smith, and L. Moser, "An analysis of the optimum node density for ad hoc mobile networks," Proc. IEEE Int'l Conf. Communications (ICC '01), Helsinki, Finland, pp. 857-861, 2001. DOI: 10.1109/ICC.2001.937360

[2] S.R. Das, C.E. Perkins, and E.M. Royer, "Performance comparison of two on-demand routing protocols for ad hoc networks," Proc. IEEE 19th Ann. Joint Conf. IEEE Computer and Communications Societies (INFOCOM), Tel Aviv, pp. 3-12, 2000. DOI: 10.1109/INFCOM.2000.832168

[3] A. Vahdat and D. Becker, "Epidemic routing for partially connected ad hoc networks," Technical Report CS-200006, Duke University, 2000.

[4] Q. Yuan, I. Cardei, and J. Wu, "Predict and relay: An efficient routing in disruption-tolerant networks," Proc. MobiHoc, pp. 95-104, 2009.

[5] L. Yin and G. Cao, "Supporting cooperative caching in ad hoc networks," IEEE Trans. Mobile Computing, vol. 5, no. 1, pp. 77-89, Jan. 2006.

[6] M.J. Pitkanen and J. Ott, "Redundancy and distributed caching in mobile DTNs," Proc. ACM/IEEE Second Workshop Mobility in the Evolving Internet Architecture (MobiArch), 2007.

[7] Y. Huang, Y. Gao, K. Nahrstedt, and W. He, "Optimizing file retrieval in delay-tolerant content distribution community," Proc. IEEE Int'l Conf. Distributed Computing Systems (ICDCS), pp. 308-316, 2009.

[8] J. Reich and A. Chaintreau, "The age of impatience: Optimal replication schemes for opportunistic networks," Proc. ACM Fifth Int'l Conf. Emerging Networking Experiments and Technologies (CoNEXT), pp. 85-96, 2009.

[9] S. Ioannidis, L. Massoulie, and A. Chaintreau, "Distributed caching over heterogeneous mobile networks," Proc. ACM SIGMETRICS Int'l Conf. Measurement and Modeling of Computer Systems, pp. 311-322, 2010.

[10] W. Gao, G. Cao, A. Iyengar, and M. Srivatsa, "Supporting cooperative caching in disruption tolerant networks," Proc. IEEE Int'l Conf. Distributed Computing Systems (ICDCS), 2011.

[11] W. Gao, G. Cao, A. Iyengar, and M. Srivatsa, "Cooperative caching for efficient data access in disruption tolerant networks," IEEE Transactions on Mobile Computing, vol. 13, no. 3, March 2014.

[12] A. Balasubramanian, B. Levine, and A. Venkataramani, "DTN routing as a resource allocation problem," Proc. ACM SIGCOMM, pp. 373-384, 2007.

[14] B. Mallika and M. Parimala, "Cooperative caching for timely and secure data access in disruption tolerant networks," International Journal of Advanced Technology in Engineering and Science, vol. 3, March 2015.

[15] S.M. Ross, Introduction to Probability Models. Academic Press, 2006.

