distributed hash tables (DHTs)

Top PDF distributed hash tables (DHTs):

Proximity Induced Labelling Schemes for Distributed Hash Tables

Peer-to-Peer (P2P) systems have recently been put forward as an unconventional approach to computer networking. The flexibility and robustness of structured P2P systems (also known as Distributed Hash Tables, or DHTs) such as CAN [17], Chord [22], Pastry [20] and Tapestry [29] have been demonstrated in recent papers. In their pure form, none of these overlay structures has any information about the underlying physical topology. The ability to incorporate physical topology information into P2P systems would be crucial for the real-world deployment of these systems in areas such as file sharing, application-layer multicast and content distribution networks. As research in P2P systems has matured over the years, attempts have been made to incorporate information about the physical topology into the overlay geometry. These attempts can be classified into three categories, as described in [9]:

Authentication in stealth distributed hash tables

The original goal of our Stealth DHT proposal was to provide a distinction between nodes of greater and lesser capabilities as a means of improving routing performance. Powerful nodes were responsible for handling message forwarding within the DHT, whereas the remaining, weaker nodes simply requested services from them. We have demonstrated that this separation can be extended to incorporate both verifiably trustworthy and potentially untrustworthy nodes. By selectively limiting the privileges of untrustworthy nodes on the network, on an individual basis if required, we can accordingly limit the numerous security problems associated with supplying service to them. By further augmenting our approach with a suitable Public Key Infrastructure to enforce the separation between node types, we have shown how a Stealth DHT can be used to supply a secure, resilient overlay that caters to both trustworthy and untrustworthy nodes simultaneously. Unlike previous approaches to security issues in DHTs, Stealth DHTs do not necessarily need to deny access to potentially untrustworthy nodes; instead, the operations that such nodes can perform need only be selectively limited. A Stealth DHT coupled with the authentication mechanisms described could therefore form an ideal secure location substrate for the numerous and varied applications that DHTs can support.
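
To make the privilege separation concrete, here is a minimal C sketch, not taken from the Stealth DHT implementation: the node types, the cert_valid flag standing in for PKI verification, and the per-node privilege flags are all illustrative assumptions.

```c
/* Minimal sketch (illustrative types and flags, not the Stealth DHT code) of
 * the separation described above: only authenticated service nodes forward
 * DHT messages, stealth nodes are limited to issuing requests, and individual
 * privileges can be revoked per node. */
#include <stdio.h>
#include <stdbool.h>

typedef enum { SERVICE_NODE, STEALTH_NODE } node_type;

typedef struct {
    int       id;
    node_type type;
    bool      cert_valid;    /* certificate verified against the PKI (assumed) */
    bool      may_store;     /* per-node privilege flags */
    bool      may_forward;
} node;

/* Routing step: stealth or unverified nodes never forward messages. */
static bool can_forward(const node *n) {
    return n->type == SERVICE_NODE && n->cert_valid && n->may_forward;
}

/* Any node may request a store, but the privilege can be limited individually. */
static bool can_request_store(const node *n) {
    return n->cert_valid && n->may_store;
}

int main(void) {
    node service = { 1, SERVICE_NODE, true,  true,  true  };
    node stealth = { 2, STEALTH_NODE, true,  true,  false };
    node revoked = { 3, STEALTH_NODE, false, false, false };

    printf("service forwards: %d, stealth forwards: %d\n",
           can_forward(&service), can_forward(&stealth));
    printf("stealth may store: %d, revoked may store: %d\n",
           can_request_store(&stealth), can_request_store(&revoked));
    return 0;
}
```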

A Symmetric Load Balancing Algorithm with Performance Guarantees for Distributed Hash Tables
Dr K S Vijaya Simha & Yarramsetty Naga Savitri

Peers in a distributed hash table (DHT) may host a number of virtual servers and can balance their loads by reallocating those virtual servers. Most decentralized load-balancing algorithms designed for DHTs based on virtual servers require the participating peers to be asymmetric, where some serve as rendezvous nodes that pair virtual servers with participating peers, thereby introducing another load-imbalance problem. State-of-the-art studies introduce significant algorithmic overheads and guarantee no rigorous performance metrics. Here, a novel symmetric load-balancing algorithm for DHTs is presented in which the participating peers approximate the system state with histograms and cooperatively implement a global index. In our proposal, each peer independently reallocates its locally hosted virtual servers by publishing to and querying the global index based on its histograms. Unlike competing algorithms, our proposal offers analytical performance guarantees in terms of the load-balance factor and the algorithmic convergence rate, and introduces no load-imbalance problem due to the algorithmic workload. Through computer simulations, we show that our proposal clearly outperforms existing distributed algorithms in terms of load-balance factor with a comparable movement cost.
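
As a rough illustration of the histogram-and-global-index idea, the following C sketch is an assumption-laden simplification: the peer loads, the histogram granularity (BINS, MAXLOAD) and the greedy hand-off policy are invented for the example and are not the paper's actual algorithm.

```c
/* Minimal sketch (illustrative data and policy, not the paper's algorithm):
 * every peer builds the same coarse histogram of loads, and each overloaded
 * peer independently hands virtual servers to underloaded peers found through
 * that shared view, with no rendezvous node. */
#include <stdio.h>

#define PEERS   4
#define BINS    4                  /* histogram granularity (assumption)   */
#define MAXLOAD 8                  /* loads assumed to lie in [0, MAXLOAD) */

int main(void) {
    /* Load of each peer, expressed as the number of hosted virtual servers. */
    int load[PEERS] = { 7, 1, 5, 3 };

    /* Step 1: the histogram every peer would publish and approximate. */
    int hist[BINS] = { 0 }, total = 0;
    for (int p = 0; p < PEERS; p++) {
        hist[load[p] * BINS / MAXLOAD]++;
        total += load[p];
    }
    for (int b = 0; b < BINS; b++)
        printf("histogram bin %d: %d peers\n", b, hist[b]);

    int target = total / PEERS;    /* estimate of the balanced load */

    /* Step 2: each overloaded peer ships virtual servers to underloaded peers. */
    for (int hot = 0; hot < PEERS; hot++) {
        int moved = 1;
        while (load[hot] > target && moved) {
            moved = 0;
            for (int cold = 0; cold < PEERS; cold++)
                if (load[cold] < target) {
                    load[cold]++; load[hot]--; moved = 1;
                    break;
                }
        }
    }
    for (int p = 0; p < PEERS; p++)
        printf("peer %d now hosts %d virtual servers\n", p, load[p]);
    return 0;
}
```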

Authentication in stealth distributed hash tables

Several works have also considered how such authentication systems may be implemented in a physically distributed fashion over peer-to-peer networks. For example, Aberer et al. discussed how a completely decentralised PKI based on a statistical approach could be deployed on many traditional DHTs (although they specifically use P-Grid [14]) [2]. The key difference in comparison with our work is that in this case, the authors consider a method that can function with a network consisting entirely of potentially untrustworthy nodes. However, they note that their system breaks down if more than 25% of nodes are actually malicious, and that it may not function with several DHTs, such as CAN or Chord [15][22]. We, however, believe that our system is implementable on any existing DHT, and should function regardless of the percentage of malicious stealth nodes.

Content-based addressing in hierarchical distributed hash tables

The hotspot problem can be clearly identified here, because the peer that holds the selected popular file is burdened with the largest request-based load on the network. This is evident in the data in Figure 6.1 for the four protocols compared. The baseline DHT protocol exhibits the highest load for a single peer, and this load increases as requests rise. This was expected: in a DHT-based network, a file's responsible peer does not change under varying load conditions, so whichever peer initially gains responsibility for a highly sought file must serve all requests for that resource. This peer has no way to redistribute the load, short of leaving and rejoining the network under a new hash ID and thus a new location in the network.

A virtual-server-based DHT fares little better than plain Chord in this example, because a virtual-server DHT attempts to distribute load by distributing the address space more evenly. This does not help with the hotspot problem: a hotspot corresponds to a single location on the network ring, so no matter how finely the address space is divided, that one element cannot be split into multiple locations handled by multiple peers. Both the plain DHT and virtual-server strategies distribute load based on the distribution of many files rather than on the actual load incurred by any particular file. This differs from replication strategies and the CBDHT protocol, which deal directly with distributing the load for a specific resource rather than a vague, unquantified collection of files.

As expected, a DHT that uses a replication strategy does achieve a lower maximum peer load than the previous protocols. However, it does not achieve a lower load than CBDHT, mainly because of one fundamental issue with replication: replicas are only effective if they lie on frequently used routing paths toward the popular file. This biases the load toward the nodes on those paths, which in DHTs are typically peers whose hashes precede the desired resource counter-clockwise around the address ring, as described in the Chord background of Section II. The closer a peer is to a desired resource, the more likely it is to be chosen as a replication point, and the more likely it is to actually serve requests once a replica has been placed on it, compared with a peer that lies further from the resource. CBDHT does not carry this bias and can therefore distribute load regardless of a peer's hash placement, allowing it to achieve the performance depicted by spreading the load as evenly as possible among interested nodes.
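
The following C sketch illustrates why virtual servers cannot dissolve a hotspot: the toy ring size, the FNV-1a stand-in hash, and the peer and virtual-server names are assumptions for illustration, not the protocols compared in Figure 6.1.

```c
/* Minimal sketch: a toy Chord-style ring showing why virtual servers cannot
 * split a single hot key.  Ring size, hash function and names are illustrative. */
#include <stdio.h>
#include <stdint.h>

#define RING   1024          /* toy identifier space          */
#define PHYS   4             /* physical peers                */
#define VNODES 8             /* virtual servers per peer      */

/* FNV-1a truncated to the toy ring -- a stand-in for SHA-1 in real Chord. */
static uint32_t hash_id(const char *s) {
    uint32_t h = 2166136261u;
    for (; *s; s++) { h ^= (uint8_t)*s; h *= 16777619u; }
    return h % RING;
}

int main(void) {
    /* Each physical peer owns VNODES points on the ring. */
    uint32_t ring_pos[PHYS * VNODES];
    int owner[PHYS * VNODES];
    char name[32];
    for (int p = 0; p < PHYS; p++)
        for (int v = 0; v < VNODES; v++) {
            snprintf(name, sizeof name, "peer%d-vs%d", p, v);
            ring_pos[p * VNODES + v] = hash_id(name);
            owner[p * VNODES + v] = p;
        }

    /* Every request for the same hot file hashes to one point, hence one
     * successor virtual server, hence one physical peer -- the hotspot. */
    uint32_t key = hash_id("very-popular-file.mp3");
    uint32_t best = RING; int resp = -1;
    for (int i = 0; i < PHYS * VNODES; i++) {
        uint32_t d = (ring_pos[i] - key + RING) % RING;   /* clockwise distance */
        if (d < best) { best = d; resp = owner[i]; }
    }
    printf("key %u -> physical peer %d, no matter how many virtual servers exist\n",
           key, resp);
    return 0;
}
```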

Small world networks, distributed hash tables and the e-resource discovery problem

A network that is supposed to support large amounts of data and varying numbers of concurrent users, possibly in the millions, has to scale well with the current requirements. Distributed peer-to-peer based systems are more flexible than centralised networks and much cheaper to maintain. One such system is the open source Distributed Hash Table (DHT) Bamboo [47], which we use in modified form for our overlay network. The Bamboo DHT does not require any centralised services. Any node in the system acts as a gateway for new nodes. The available resources grow with the system size, since every participant is not only a consumer but also a provider of storage space and can act as a router for messages that are not addressed to itself.

IJCSMC, Vol. 3, Issue 4, April 2014, pp. 1375–1379, RESEARCH ARTICLE: A Self-Destructing Data System Based on Active Storage Framework for Protecting Data Privacy from Attackers

A pioneering study, Vanish [1], provides a new approach to sharing data while protecting privacy. In the Vanish system, a secret key is divided and stored in a P2P system with distributed hash tables (DHTs). As P2P nodes join and leave, the system maintains the secret key shares. Owing to the characteristics of P2P networks, the DHT refreshes every node after about eight hours. With the Shamir Secret Sharing algorithm [2], once one cannot obtain enough shares of a key, one cannot decrypt the data encrypted with that key, which means the key is effectively destroyed.
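
Below is a minimal sketch of the split-and-reconstruct step, assuming a small prime field, five shares and a threshold of three; it is not Vanish's code, and a real deployment would use a cryptographically sized field and push each share to a different DHT node.

```c
/* Sketch of Shamir (k, n) secret sharing as used conceptually above: any k
 * shares reconstruct the key, fewer than k surviving DHT entries reveal nothing.
 * Field prime, share count and threshold are illustrative assumptions. */
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

#define P 2147483647ULL            /* prime field, 2^31 - 1 */

static uint64_t mulmod(uint64_t a, uint64_t b) { return (a * b) % P; }

static uint64_t powmod(uint64_t b, uint64_t e) {
    uint64_t r = 1;
    for (b %= P; e; e >>= 1, b = mulmod(b, b))
        if (e & 1) r = mulmod(r, b);
    return r;
}
static uint64_t inv(uint64_t a) { return powmod(a, P - 2); } /* Fermat inverse */

/* Evaluate the degree-(k-1) polynomial coeff[0..k-1] at x (Horner). */
static uint64_t eval(const uint64_t *coeff, int k, uint64_t x) {
    uint64_t y = 0;
    for (int i = k - 1; i >= 0; i--) y = (mulmod(y, x) + coeff[i]) % P;
    return y;
}

/* Lagrange interpolation at x = 0 from k points (xs[i], ys[i]). */
static uint64_t reconstruct(const uint64_t *xs, const uint64_t *ys, int k) {
    uint64_t s = 0;
    for (int i = 0; i < k; i++) {
        uint64_t num = 1, den = 1;
        for (int j = 0; j < k; j++) {
            if (j == i) continue;
            num = mulmod(num, (P - xs[j]) % P);          /* (0 - x_j)   */
            den = mulmod(den, (xs[i] + P - xs[j]) % P);  /* (x_i - x_j) */
        }
        s = (s + mulmod(ys[i], mulmod(num, inv(den)))) % P;
    }
    return s;
}

int main(void) {
    const int n = 5, k = 3;                     /* 5 shares, any 3 recover the key */
    uint64_t coeff[3] = { 123456789, 0, 0 };    /* coeff[0] = the secret key       */
    coeff[1] = rand() % P; coeff[2] = rand() % P;

    uint64_t xs[5], ys[5];
    for (int i = 0; i < n; i++) {               /* each share would go to one DHT node */
        xs[i] = i + 1;
        ys[i] = eval(coeff, k, xs[i]);
    }
    /* Any k surviving shares rebuild the key; here shares 1, 3 and 4. */
    uint64_t sx[3] = { xs[0], xs[2], xs[3] }, sy[3] = { ys[0], ys[2], ys[3] };
    printf("recovered secret: %llu\n", (unsigned long long)reconstruct(sx, sy, k));
    return 0;
}
```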

An Adaptive Multi-level Hashing Structure for Fast Approximate Similarity Search

Although several MAMs have been proposed to speed up similarity queries, most of them are either affected by the well-known "curse of dimensionality" or suffer from overlap among regions. Some studies have shown that representing data with hyperspherical or rectangular region hierarchies can make similarity queries perform even worse than sequential search [Böhm et al. 2001]. Different approaches have been studied to address the "curse of dimensionality". One line of research tries to avoid the dimensionality problem by relaxing query precision in order to speed up query time. This approach is feasible for applications that do not require exact answers and for which speed matters more than search accuracy. Moreover, the metric-space definition already leads to an approximation of the true answer, so a second approximation at search time may be acceptable [Chávez et al. 2001]. In this direction, Locality Sensitive Hashing (LSH) [Datar et al. 2004] is one of the recent hash-based techniques proposed to organize and query high-dimensional data. Indeed, LSH is one of the few techniques that provide a solid theoretical analysis and a predictable loss of accuracy in the results. To answer similarity queries, LSH searches only the regions, represented by buckets, to which the query object is hashed (i.e., the candidate buckets containing the dataset objects with a high probability of similarity to the query object). There is therefore no need to fully explore the index data; only the objects in the candidate buckets require further processing. However, some drawbacks of LSH have not been solved entirely, as follows:
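
As a hedged illustration of the hashing step, the sketch below implements a single p-stable LSH function in the style of Datar et al.; the dimensionality D, the bucket width W and the toy vectors are assumptions, and a real index would combine several such functions and hash tables.

```c
/* Minimal sketch of one p-stable LSH function: h(v) = floor((a.v + b) / W)
 * with a ~ N(0,1)^D and b ~ U[0, W).  Nearby vectors fall into the same
 * bucket with high probability, so only that bucket needs further checking. */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define D 8                  /* dimensionality (assumption) */
#define W 4.0                /* bucket width   (assumption) */

static const double TWO_PI = 6.28318530717958647692;

static double uniform(void) { return (rand() + 1.0) / ((double)RAND_MAX + 2.0); }

/* Standard normal via Box-Muller. */
static double gaussian(void) {
    double u1 = uniform(), u2 = uniform();
    return sqrt(-2.0 * log(u1)) * cos(TWO_PI * u2);
}

/* One LSH function: project, shift, quantize into a bucket id. */
static long lsh_bucket(const double *v, const double *a, double b) {
    double dot = 0.0;
    for (int i = 0; i < D; i++) dot += a[i] * v[i];
    return (long)floor((dot + b) / W);
}

int main(void) {
    double a[D], b = uniform() * W;
    for (int i = 0; i < D; i++) a[i] = gaussian();

    double q[D], near[D], far[D];
    for (int i = 0; i < D; i++) {
        q[i]    = uniform();
        near[i] = q[i] + 0.01;       /* a close neighbour of q */
        far[i]  = q[i] + 100.0;      /* a distant object       */
    }
    printf("query bucket %ld, near bucket %ld, far bucket %ld\n",
           lsh_bucket(q, a, b), lsh_bucket(near, a, b), lsh_bucket(far, a, b));
    return 0;
}
```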

Novel and efficient Authentication Scheme for IoE in Smart Home Environment

The review focuses mainly on the different security weaknesses in smart homes, their impacts, and countermeasures to the identified issues that satisfy most security requirements. The papers are summarised according to parameters such as mutual authentication, lightweight schemes, resistance to attacks, distributed nature, trust, and access-control policy. Considering the real security challenges of the smart home IoE, a novel authentication scheme is proposed that is lightweight and attack-resistant for the distributed nature of the smart home IoE and that can be securely deployed even in low-cost objects, with computation overhead, storage overhead and communication overhead considered in the performance analysis.

A Semantic Approach of ITS: Perspective on Road Safety

When a search fails, we can then insert the key into the hash table. Insertion proceeds as follows: if the slot is empty, a new 12-byte array is allocated and assigned. The first 4 bytes in the array are reserved to store the number of entries (initialized to 1); the next 8 bytes store the key (4 bytes) followed by its payload data, a 4-byte integer. Otherwise, we resize the existing array by 8 bytes (4 bytes for the key and 4 bytes for its payload data) using the realloc system call. The key and its payload are then appended to the array and the number of entries in the array is incremented by 1, completing the insertion process.
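
A short C sketch of the insertion routine as described, with the table type, slot count and slot-lookup step assumed for illustration; error handling for malloc/realloc is omitted.

```c
/* Sketch of the insertion routine described above (field layout as stated:
 * 4-byte entry count, then repeated 4-byte key / 4-byte payload pairs).
 * The table type and the slot index passed in are assumptions. */
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

#define NSLOTS 1024                      /* assumed table size */

typedef struct { int32_t *slots[NSLOTS]; } table_t;

static void insert(table_t *t, uint32_t slot, int32_t key, int32_t payload) {
    int32_t *arr = t->slots[slot];
    if (arr == NULL) {
        /* Empty slot: allocate 12 bytes = count + first (key, payload) pair. */
        arr = malloc(3 * sizeof(int32_t));
        arr[0] = 1;                      /* number of entries */
        arr[1] = key;
        arr[2] = payload;
    } else {
        /* Occupied slot: grow by 8 bytes and append the new pair. */
        int32_t n = arr[0];
        arr = realloc(arr, (1 + 2 * (n + 1)) * sizeof(int32_t));
        arr[1 + 2 * n] = key;
        arr[2 + 2 * n] = payload;
        arr[0] = n + 1;                  /* increment the entry count */
    }
    t->slots[slot] = arr;
}

int main(void) {
    table_t t = { { NULL } };
    insert(&t, 42, 7, 99);
    insert(&t, 42, 8, 100);
    printf("slot 42 holds %d entries\n", t.slots[42][0]);
    return 0;
}
```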

Decentralization of a Multi Data Source Distributed Processing System Using a Distributed Hash Table

Nodes are able to transmit data. Each node in the system has the same uniform structure, shown in Figure 1, and contains the following logical elements: a DHT routing table, a set of data sources, an fz container, a state indicator, a network connection, and two queues (messages in and messages out). Both queues use a FIFO mechanism and algorithms for dropping outdated elements. For the sake of simplicity, in this paper 'processing' is interpreted as computing, although it can also mean processing control functions, gathering real-time data, etc. There are Z computational tasks in the system. These tasks are issued to the system by nodes; the node that submits a task for computation is called the task owner. Tasks contain massive amounts of digital data. The key idea of distributed processing is to process such tasks using many computational units; thus, each task is divided into blocks.

Identification of Sparse Audio Tampering Using Distributed Source Coding and Compressive Sensing Techniques

To clarify the capabilities and limitations of the proposed system, Figure 1 shows an example of malicious tampering with an audio signal. The demonstration was carried out on a piece of speech, approximately 2 seconds long, read from a newspaper by a speaker. The whole recording, which is about 32 seconds long, has also been used as a proof of concept to present some experimental results on the system in Section 7. Figure 1(a) shows the original waveform, which corresponds to the Italian sentence "un sequestro da tredici milioni di euro" (a confiscation of thirteen million euros). This sentence has been tampered with to substitute the words "tredici milioni" (thirteen million) with "quindici miliardi" (fifteen billion); see Figure 1(b). To compute the hash, as explained in Section 4, we compute a coarse-scale perceptual time-frequency map of the signal (in this case, with a temporal resolution of 1/4 second). From the received tampered waveform and the hash information, the user is able to identify the tampering (Figure 1(d)).

Distributed Smooth Projective Hashing and its Application to Two-Server PAKE

We extend this line of work by considering divergent parametrised languages in one smooth projective hash function that allows multiple parties to jointly evaluate the result of the function. We propose the notion of (distributed) extended smooth projective hashing, which enables joint hash computation for special languages. Further, we propose a new two-server password-authenticated key exchange framework using the new notion of distributed smooth projective hashing and show how it helps to explain the protocol from [13]. In fact, the authors of [2] already built a group PAKE protocol using smooth projective hashing in a multi-party protocol; however, they assume a ring structure such that the smooth projective hashing is only used between two parties.

Detection of Node Clones in Wireless Sensor Network Using Detection Protocols

Sensor nodes lack tamper-resistant hardware and are subject to node clone attacks, so two distributed detection protocols are presented. One is based on a distributed hash table, which forms a Chord overlay network and provides the key-based routing, caching and checking facilities for clone detection; the other uses a probabilistic directed technique to achieve efficient communication overhead for a satisfactory detection probability. While the DHT-based protocol provides a high security level for all kinds of sensor networks through one deterministic witness and additional memory-efficient, probabilistic witnesses, the randomly directed exploration offers outstanding communication performance and minimal storage consumption for dense sensor networks. The analysis and simulation results show that the randomly directed exploration protocol outperforms all other distributed detection protocols in terms of communication cost and storage requirements, while its detection probability is satisfactory and higher than that of the line-selected multicast scheme.
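
To illustrate the DHT-based witness idea, here is a hedged C sketch: the identifier space, the toy hash used for key-based routing, and the claim format are assumptions rather than the protocol's actual messages.

```c
/* Minimal sketch of the DHT-based idea above: location claims for a node ID
 * are routed to a witness chosen by hashing the ID, so the witness sees every
 * claim and can spot two different locations for the same ID. */
#include <stdio.h>
#include <stdint.h>

#define RING 256                         /* toy Chord identifier space */

typedef struct { int node_id; int x, y; } claim_t;   /* ID plus claimed location */

static uint32_t key_for(int node_id) {               /* key-based routing target */
    uint32_t h = 2166136261u;
    for (int i = 0; i < 4; i++) { h ^= (node_id >> (8 * i)) & 0xff; h *= 16777619u; }
    return h % RING;
}

/* The witness caches the first claim per ID and checks later ones against it. */
static claim_t cache[RING];
static int seen[RING];

static void witness_check(claim_t c) {
    uint32_t k = key_for(c.node_id);     /* all claims for this ID land here */
    if (!seen[k]) { cache[k] = c; seen[k] = 1; return; }
    if (cache[k].node_id == c.node_id &&
        (cache[k].x != c.x || cache[k].y != c.y))
        printf("clone detected: node %d claims (%d,%d) and (%d,%d)\n",
               c.node_id, cache[k].x, cache[k].y, c.x, c.y);
}

int main(void) {
    witness_check((claim_t){ 17, 3, 4 });    /* legitimate node 17         */
    witness_check((claim_t){ 17, 9, 1 });    /* clone of node 17 elsewhere  */
    return 0;
}
```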

Cost efficient overflow routing for outbound ISP traffic

The operation of SAPOR can be summarised as follows: Packets are classified to identify microflows and if packets belong to microflows that already exist, they are forwarded on the same interface that the first packet saw in the scheme. If packets belong to a new flow and tokens are available for this destination, the outgoing interface is chosen on the basis of the original forwarding table. If no tokens are available the overflow-forwarding table is consulted and packets are forwarded on alternative interfaces. All subsequent packets that belong to the same microflow will use the same interface for the duration of the microflow. The number of tokens that are available for one interface determines the number of microflows that are allowed. SAPOR uses the original BGP routing table to determine the default routing and uses alternative routing tables for overflow traffic.
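
The forwarding decision can be sketched as below; the flow-key representation, the token count and the two-interface setup are illustrative assumptions, not SAPOR's implementation.

```c
/* Sketch of the decision described above: existing microflows stick to their
 * interface, new flows use the default interface while tokens remain, and
 * overflow traffic is sent to the alternative interface. */
#include <stdio.h>

#define MAXFLOWS 1024
#define TOKENS   2                /* microflows allowed on the default interface */

enum { IF_DEFAULT = 0, IF_OVERFLOW = 1 };

static unsigned flow_key[MAXFLOWS];   /* known microflows          */
static int      flow_if[MAXFLOWS];    /* interface pinned per flow */
static int      nflows = 0, tokens = TOKENS;

static int classify(unsigned key) {
    for (int i = 0; i < nflows; i++)          /* existing flow: same interface */
        if (flow_key[i] == key) return flow_if[i];

    int out;
    if (tokens > 0) { tokens--; out = IF_DEFAULT; }   /* token available        */
    else            { out = IF_OVERFLOW; }            /* consult overflow table */

    flow_key[nflows] = key; flow_if[nflows] = out; nflows++;
    return out;
}

int main(void) {
    unsigned packets[] = { 101, 102, 101, 103, 103, 104 };   /* toy flow keys */
    for (int i = 0; i < 6; i++)
        printf("packet of flow %u -> interface %d\n",
               packets[i], classify(packets[i]));
    return 0;
}
```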

Keyless Signatures' Infrastructure: How to Build Global Distributed Hash-Trees

Hash Trees: The hash-tree aggregation technique was first proposed by Merkle [6] and first used for digital time-stamping by Haber et al. [5]. Hash-tree time-stamping uses a one-way hash function to convert a list of documents into a fixed-length digest that is associated with time. The user sends a hash of a document to the service and receives a signature token, a proof that the data existed at the given time and that the request was received through a specific access point. All received requests are aggregated into a large hash tree, and the top of the tree is fixed and retained for each second (Fig. 1). Signature tokens contain data for reconstructing a path through the hash tree, starting from a signed hash value (a leaf) up to the top hash value. For example, to verify a token y in the place of x_2 (Fig. 1), we first concatenate y with x_1 (retained
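
A small sketch of the token-verification climb described above; the 64-bit toy hash is only a stand-in for a real cryptographic hash such as SHA-256, and the four-leaf tree is an assumed example.

```c
/* Sketch of verifying a hash-tree (Merkle) path: starting from a leaf,
 * repeatedly concatenate with the sibling supplied in the signature token and
 * re-hash until the retained top value is reached. */
#include <stdio.h>
#include <stdint.h>

static uint64_t toy_hash(uint64_t a, uint64_t b) {        /* NOT cryptographic */
    uint64_t h = 1469598103934665603ULL;
    uint64_t in[2] = { a, b };
    for (int w = 0; w < 2; w++)
        for (int i = 0; i < 8; i++) { h ^= (in[w] >> (8 * i)) & 0xff; h *= 1099511628211ULL; }
    return h;
}

/* One path step: the sibling value plus whether it sits on the left or right. */
typedef struct { uint64_t sibling; int sibling_is_left; } step_t;

static uint64_t climb(uint64_t leaf, const step_t *path, int depth) {
    uint64_t h = leaf;
    for (int i = 0; i < depth; i++)
        h = path[i].sibling_is_left ? toy_hash(path[i].sibling, h)
                                    : toy_hash(h, path[i].sibling);
    return h;
}

int main(void) {
    /* Build a tiny 4-leaf tree so the true top value is known. */
    uint64_t x1 = 11, x2 = 22, x3 = 33, x4 = 44;
    uint64_t n12 = toy_hash(x1, x2), n34 = toy_hash(x3, x4);
    uint64_t top = toy_hash(n12, n34);

    /* Token for leaf x2: its siblings along the path are x1 (left) and n34 (right). */
    step_t token[2] = { { x1, 1 }, { n34, 0 } };
    printf("path for x2 %s the retained top value\n",
           climb(x2, token, 2) == top ? "matches" : "does not match");
    return 0;
}
```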

A survey paper on Secure Hash Based Distributed De-duplication Systems

In this paper, the authors identify attacks that exploit client-side deduplication, allowing an attacker to gain access to arbitrarily sized files of other users based only on small hash signatures of these files. More specifically, an attacker who knows the hash signature of a file can convince the storage service that it owns that file, and the server then lets the attacker download the entire file.

Network Security by Detecting Clone Node Using RSA Bitmap and DHT

FAROO is a worldwide web search engine based on peer-to-peer technology. It uses a distributed crawler that stores search data on users' computers instead of on a central server. Whenever a user visits a website, it is automatically indexed and distributed to the network. Ranking is done by comparing users' usage statistics, such as the web pages visited, the amount of time spent on each page, and whether the pages were bookmarked or printed [8].

Hashxplorer: A Distributed System for Hash Matching

The most shocking news to break at the CRYPTO 2004 conference was that an MD5 hash had been cracked, and this was soon followed by news that SHA-1 had also been broken. This became a great concern to researchers in the cryptography field. Although SHA-1 was cracked, the increase in risk is limited, so the impact of the hash cracking is not extensive. The cases affected by hash cracking include password authentication and forensic tool usage. A password can be leaked or stolen, but the chances of this decrease if the user renews it frequently or if the password storage is relocated. SHA-1 used to be considered highly secure, but research teams from China and the USA later found that it was a thousand times weaker than previously thought. The point of the SHA-1 break is not to highlight the weakness of SHA-1 itself but to remind us that the development and strengthening of our existing security mechanisms should be sped up [24].

5.7 Cracking More Password Hashes with Patterns [25]

A common mistake made by application developers is to store user passwords in databases either as plaintext or as unsalted hash values; many hacking attempts succeed for this reason. To recover passwords from seized hashes, attackers use brute-force, dictionary or rainbow-table attacks to disclose the plaintext passwords. Dictionary attacks are the fastest of the three but have a low success rate; however, this success rate can be increased by applying new methods. There are ten password patterns that can be cracked easily, called the identified patterns: Appending, Prefixing, Inserting, Repeating, Sequencing, Replacing, Reversing, Capitalizing, Special-format, and Mixed Patterns. Weak passwords are a threat to the authentication system, and attackers can use different attack methods to crack password hashes, notably unsalted hashes. Security experts have raised awareness of the need for strong passwords, and authentication rules force users to generate strong passwords, yet most users choose passwords that follow the same patterns. Such same-pattern passwords can be identified and cracked easily. A pattern-based attack has been designed that upgrades dictionary attacks and can be considered a new generation of dictionary attack [25].

5.8 Choosing the Best Hashing Strategies and Hash Functions [26]

The Usefulness of Multilevel Hash Tables with Multiple Hash Functions in Large Databases

The technique is employed on networks to access IP addresses quickly. Due to the increased use of networks and users' desire to access information in a short time, an attempt is made to ensure the system can access a given IP address very fast. Random numbers are generated in the form of IP addresses from 0:0:0:0 to 125:125:125:125; these numbers serve as keys to the hash functions at the level-zero, level-one and level-two hash tables. Figures 4 and 5 illustrate the time to look up a given key. For example, in Figure 4 with 2,000 records, it takes 13,203 milliseconds to look up a particular key at level zero, 7,250 milliseconds at level one and 6,219 milliseconds at level two for the same key. In Figure 5 with 10,000 data records, it takes 40,328 milliseconds to search for a given key at level zero, 13,203 milliseconds at level one and 8,391 milliseconds at level two. The same analysis is also applied to data records of 500 and 5,000. The results indicate that looking up a given key is faster at the one-level hash table than at the zero-level, and faster again at the two-level than at the one-level. This indicates that multilevel hash tables with multiple hash functions can be employed to look up IP addresses in an internet environment.
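
One common way to arrange such a structure is sketched below, under assumed table sizes and hash multipliers, not the paper's implementation: each level has its own hash function, an item that collides at one level is placed at the next, and a lookup probes at most one slot per level.

```c
/* Minimal sketch of a multilevel hash table with a different hash function
 * per level: collisions cascade to deeper levels, so lookups never walk a
 * long chain.  Sizes and hash multipliers are illustrative assumptions. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>

#define LEVELS 3
#define SLOTS  64

static char table[LEVELS][SLOTS][16];              /* stored keys (IPs as text) */

/* A different multiplier per level gives each level its own hash function. */
static unsigned level_hash(const char *key, int level) {
    static const unsigned mult[LEVELS] = { 31u, 131u, 1313u };
    unsigned h = 0;
    for (; *key; key++) h = h * mult[level] + (unsigned char)*key;
    return h % SLOTS;
}

static int insert(const char *key) {
    for (int l = 0; l < LEVELS; l++) {
        unsigned s = level_hash(key, l);
        if (table[l][s][0] == '\0') {              /* free slot at this level */
            snprintf(table[l][s], sizeof table[l][s], "%s", key);
            return l;
        }
    }
    return -1;                                     /* all levels collided */
}

static int lookup(const char *key) {
    for (int l = 0; l < LEVELS; l++)
        if (strcmp(table[l][level_hash(key, l)], key) == 0) return l;
    return -1;
}

int main(void) {
    const char *ips[] = { "10.0.0.1", "10.0.0.2", "125.125.125.125" };
    for (int i = 0; i < 3; i++)
        printf("%s inserted at level %d\n", ips[i], insert(ips[i]));
    printf("lookup 10.0.0.2 -> level %d\n", lookup("10.0.0.2"));
    return 0;
}
```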
