Abstract— The purpose of our work is to realize load balancing of nodes in a P2P network. We propose a replica management protocol that exploits replicas to balance the load of each node managing popular data, by adapting replica partition trees to BATON (BAlanced Tree Overlay Network), a balanced tree overlay network. BATON has a load balancing mechanism in which each node adjusts the number of data items it manages. However, if some popular data items are accessed by a large number of nodes in the network, this mechanism can fail. Our protocol balances the loads of both data transmission and replica management at each node. We present simulation results comparing our proposal with a method without replicas and with a simple replica management method, and discuss the strengths and weaknesses of our proposal.
In this paper, a region-based framework is proposed. The design of the framework is based on centralized data replication management. Fig. 1 shows the framework. A region master is responsible for triggering replica management. Groups of sites are located in the same network region, where a network region is a network topological space in which sites are located close to each other. Each region has a region header, which manages the site information in that region. The region master collects information about accessed files from all region headers. Each site maintains a history of the files accessed at that site. An access history entry records the file id, region id, and the time, indicating that the file was accessed by a site located in the region (regionid) at that particular time. At a regular time interval, each site sends its access history to its region header. The region header maintains the number of times each file has been accessed in its region and sends this information to the region master at a constant time interval. The region master maintains global information on each file and the number of times it has been accessed. The selection of popular files is based on the weights assigned to the files.
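The aggregation path from sites to region headers to the region master can be sketched as follows. This is an illustrative sketch only: the class names (`RegionHeader`, `RegionMaster`), the in-memory structures, and the use of a raw access count as the popularity weight are assumptions, not details from the paper.

```python
from collections import Counter, defaultdict

class RegionHeader:
    """Collects per-site access histories for one network region."""
    def __init__(self, region_id):
        self.region_id = region_id
        self.counts = Counter()  # file_id -> number of accesses in this region

    def receive_history(self, history):
        # history: list of (file_id, region_id, timestamp) tuples sent by a site
        for file_id, _region, _ts in history:
            self.counts[file_id] += 1

class RegionMaster:
    """Maintains global per-file access counts and selects popular files."""
    def __init__(self):
        self.global_counts = defaultdict(Counter)  # file_id -> region_id -> count

    def receive_report(self, header):
        # Called at a constant interval with each region header's summary.
        for file_id, n in header.counts.items():
            self.global_counts[file_id][header.region_id] += n

    def popular_files(self, threshold):
        # The weight here is simply the total access count across regions;
        # a real policy could also factor in recency or file size.
        totals = {f: sum(c.values()) for f, c in self.global_counts.items()}
        return [f for f, w in totals.items() if w >= threshold]
```

A site would periodically call `receive_history` on its header, and each header would periodically be passed to `receive_report`; the master then picks replication candidates via `popular_files`.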
Generally, a data grid consists of two levels of services: low-level services and high-level services. We have tried to improve one component of the low-level services, namely replica management, using knowledge-base representation. Although we do not yet know how knowledge is represented in the human mind, knowledge representation techniques and methods allow us to process our data into a knowledge base. Moreover, the implementation of ontological rules helps us formulate an exhaustive and rigorous conceptual schema within a given domain: a typically hierarchical data structure containing all the relevant entities, their relationships, and the rules (theorems, regulations) within that domain.
However, despite its potential gains, cloud computing still falls short of providing 100% up-time and assured data availability to business organizations. These issues can be so critical for business processes that organizations might even lose data. Addressing this problem, we proposed a reliability- and availability-based fault tolerance approach built on replica distribution in previous work. In this work, we evaluate that approach by implementing the scheme and obtaining a statistical performance analysis. We show the results we could extract and the replica decisions that are preferable when such situations actually occur. The analysis of the results shows that the approach from our previous work helps cloud service providers cope with faulty conditions using the proposed fault tolerance policy.
Roam addresses scalability with a three-pronged attack. The Ward Model addresses issues of replica management, consistency topologies, and update distribution. Dynamic algorithms and mechanisms handle the scalability of the versioning information required for consistency maintenance. Finally, we address consistency itself with new algorithms and mobile-friendly semantics. In summary, Roam is a comprehensive replication system for mobile and non-mobile users alike. With it, users can truly compute while mobile, paving the way for both improved user productivity and new research along mobile computing avenues. We believe the particular point it occupies in the solution space addresses a real problem and has wide applicability, not just for replicating files in a mobile context but more generally for a wide range of replication-related problems, including but not limited to military cases, software development scenarios, distributed database problems such as airline reservation systems, and general-purpose distributed computing.
The Hadoop distributed file system (HDFS) is responsible for storing very large datasets reliably on clusters of commodity machines. The HDFS takes advantage of replication to serve data requested by clients with high throughput. Data replication is a trade-off between better data availability and higher disk usage. Recent studies propose different data replication management frameworks that alter the replication factor of files dynamically in response to the popularity of the data, keeping more replicas for in-demand data to enhance the overall performance of the system. When data gets less popular, these schemes reduce the replication factor, which changes the data distribution and leads to unbalanced data distribution. Such an unbalanced data distribution causes hot spots, low data locality and excessive network usage in the cluster. In this work, we first confirm that reducing the replication factor causes unbalanced data distribution when using Hadoop’s default replica deletion scheme. Then, we show that even keeping a balanced data distribution using WBRD (data-distribution-aware replica deletion scheme) that we proposed in previous work performs sub-optimally on heterogeneous clusters. In order to overcome this issue, we propose a heterogeneity-aware replica deletion scheme (HaRD). HaRD considers the nodes’ processing capabilities when deleting replicas; hence it stores more replicas on the more powerful nodes. We implemented HaRD on top of HDFS and conducted a performance evaluation on a 23-node dedicated heterogeneous cluster. Our results show that HaRD reduced execution time by up to 60%, and 17% when compared to Hadoop and WBRD, respectively. Keywords: Hadoop distributed file system (HDFS), Replication factor, Replica management framework, Software performance
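The core idea of a heterogeneity-aware deletion policy can be sketched in a few lines. This is a hedged illustration in the spirit of HaRD, not the actual HDFS implementation: the function name, the scalar capability model, and the tie-free ranking are all assumptions.

```python
def replicas_to_delete(replica_nodes, node_capability, new_factor):
    """Choose which replicas of one block to delete when its replication
    factor is reduced, preferring to keep replicas on powerful nodes.

    replica_nodes   -- list of node ids currently holding a replica
    node_capability -- dict: node id -> relative processing capability
    new_factor      -- target replication factor (< len(replica_nodes))
    """
    # Rank nodes by processing capability, most capable first.
    ranked = sorted(replica_nodes,
                    key=lambda n: node_capability[n], reverse=True)
    # Keep replicas on the `new_factor` most capable nodes; the rest
    # are candidates for deletion.
    return ranked[new_factor:]
```

A real scheme would additionally weigh rack placement and the resulting per-node data balance, but the ranking step above captures why more replicas end up on the more powerful nodes.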
Abstract: Cloud computing has nowadays become the most popular and reliable computing technique for organizations and individuals. In cloud environments, data availability and backup replication are critical and complex issues, and an efficient fault tolerance policy is a major concern. A fault tolerance policy is the strategy put into action when a fault occurs in the system. Taking backups is one of the most common ways of keeping data safe from such faulty conditions, but traditional schemes and backup servers can sometimes run into bottleneck conditions because of their replica management schemes. A CSP has to adopt a fault tolerance policy to ensure continuous delivery of services. There are two main categories of policies, proactive and reactive fault tolerance, each with advantages in different situations. Our research introduces a new approach as a fault tolerance policy that has characteristics of both of the above-mentioned policies. In our approach, replica distribution and retrieval are the functional areas.
An aggressive low-power technique, called voltage overscaling (VOS), was introduced in  to lower the supply voltage beyond the critical supply voltage without sacrificing throughput. However, VOS degrades the signal-to-noise ratio (SNR). The algorithmic noise tolerance (ANT) technique  combines a VOS block with a reduced-precision replica (RPR), which corrects soft errors accurately and saves energy. Several ANT variant designs are proposed in –, and the ANT design is further extended to the system level in . However, the RPRs in the ANT designs of – are designed in an ad hoc manner and are not easily generalized.
We have two main questions that remain to be answered in future work. The first is the question of the UV-completion of the factorized saddles away from the IR regime and the consistency of the adopted regularization approach in the free energy. The second is the stability of the discussed saddle points with respect to the replica-nondiagonal fluctuations. We expect that our findings regarding the non-trivial saddle points of the SYK model at strong coupling might be useful for gaining a new perspective on 2d quantum gravity via holography.
As low-power SRAM designs move toward low supply voltages, threshold and supply voltage variations have a larger impact on the power characteristics and speed of SRAM. We present techniques based on replica circuits that minimize the effect of operating conditions on power as well as speed. Replica bitlines and memory cells are used to generate a reference signal whose delay tracks that of the bitlines. This signal is used to generate the sense clock with minimal slack time and to control wordline pulsewidths to limit bitline swings. We implemented circuits for variants of the technique, using cell current rationing.
Inhalation of aerosol drugs is used extensively for treatment of respiratory system ailments. Recently the success of this methodology for the treatment of other diseases and rapid pain management has been researched. Current devices used for aerosol drug delivery have limited deposition efficiencies of 5 to 20% (Kleinstreuer et al., 2003). Since these aerosol drugs are sometimes aggressive and can harm healthy tissue, there is a need for an aerosol drug delivery system that has higher deposition efficiency. The innovative idea of injecting a small particle stream into only a small fraction of the airway inlet, and therefore controlling where particles deposit, was proposed. The proposed system revolves around a Smart Inhaler which will be able to control the particle release position, breathing pattern, and particle release time during the inhalation. Along with controlled particle release position, the other two key factors are: determining optimal aerosol characteristics (e.g. size, shape, and density) and controlling the waveform of the inhalation flow.
Upon receiving ok_lock from the primary, the underlying replica executes the following actions: 1) the replica executes the transaction procedure and updates the object in its local_store; 2) the replica triggers the other replicas to update the object in their local_store; 3) the primary releases the locked objects; 4) the replica adds tid to commit_list and removes it from active_list; 5) the replica notifies the client with a trans_commit message. These actions are shown in rule 4a.
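The five actions above can be sketched as a single handler. This is a minimal, hypothetical model: the class names, the in-memory `local_store`/`active_list`/`commit_list` structures, and the reduction of message passing to direct method calls are all assumptions made for illustration, not the protocol's actual implementation.

```python
class Primary:
    """Holds the set of objects currently locked on behalf of transactions."""
    def __init__(self):
        self.locked = set()

class Replica:
    def __init__(self, name):
        self.name = name
        self.local_store = {}    # object id -> value
        self.active_list = set() # transaction ids still in progress
        self.commit_list = set() # transaction ids committed

    def apply_update(self, obj, value):
        self.local_store[obj] = value

    def on_ok_lock(self, tid, obj, value, peers, primary, client_log):
        """Handler for ok_lock from the primary (the five actions of rule 4a)."""
        # 1) Execute the transaction procedure; update the local store.
        self.apply_update(obj, value)
        # 2) Trigger the other replicas to update their local_store.
        for p in peers:
            p.apply_update(obj, value)
        # 3) The primary releases the locked object.
        primary.locked.discard(obj)
        # 4) Move tid from active_list to commit_list.
        self.active_list.discard(tid)
        self.commit_list.add(tid)
        # 5) Notify the client with a trans_commit message.
        client_log.append(("trans_commit", tid))
```

Modeling the messages as method calls keeps the control flow of rule 4a visible; a real system would send actions 2, 3, and 5 over the network.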
Besides direct light sources, we can place materials over the tank surface to obtain a replica of the Earth's albedo. We can change the illumination in the tank room so as to obtain different responses from the same material. Presently, our room light sources are a computer screen beside the tank and a remote-controlled LED over the tank. Examples of materials used are moss to mimic “green” regions such as forests, glass dust for highly reflective surfaces such as glaciers or snow, and sand and bricks for land soils. Finally, it is possible to use two fans to create waves when the tank is filled with water, and white particles suspended in salty water to mimic clouds and fog.
Committing a block. Although an honest leader’s block will be uniquely certified, it is non-trivial for other honest replicas to find this out. Observe that a certificate requires only one honest signature (f + 1 signatures in total). If a Byzantine replica proposes and votes for a worse-ranked block, honest replicas must account for the possibility that this vote comes from an honest replica, in which case, a conflicting certificate may have formed and been kept secret by Byzantine replicas. Therefore, Dfnty uses a weaker condition called unique-extensibility to commit blocks.
However, we place some limits on the adversary's ability to compromise nodes. We note that if the adversary can compromise a major fraction of the nodes in the network, he will neither need nor benefit much from deploying replicas. To amplify his effectiveness, the adversary can also launch a replica node attack, which is the subject of our investigation. We assume that the adversary can produce many replica nodes and that they will be accepted as a legitimate part of the network. We also assume that the attacker attempts to employ as many replicas of one or more compromised sensor nodes as will be effective for his attacks. The attacker can allow his replica nodes to move randomly, or he can move them in different patterns in an attempt to frustrate our proposed scheme.
cycle from the highest temperature to the lowest. Figure 8 follows the temperature of a selected replica for 10 ns. The transitions among the temperatures follow a regular pattern, with rapid transitions between neighboring replicas at times when λ is near 0 and 1, followed by no transitions for a period of τ/2 when λ is between 0 and 1 (τ equals 0.2 ns). An analysis of all the data from the trpzip2 simulations shows that it takes 5 ± 1 ns for a replica to move from 273 to 600 K, which is consistent with Fig. 8, where the replica goes from 273 to 600 K and back in about 10 ns. This is at least as fast as conventional RE for the same system, as shown in Fig. 2 of Ref. 5. For the alanine dipeptide system, the REDS2 method takes 0.16 ± 0.02 ns to go from 300 to 600 K, about the same as conventional RE, which takes 0.18 ± 0.02 ns. This value for RE is faster than the value of 0.8 ± 0.1 ns reported previously for the same system.22 That simulation attempted exchanges every 1 ps rather than every 0.1 ps, as done here, demonstrating that shorter exchange intervals may lead to more efficient sampling, as suggested elsewhere.49 The replicas using the REDS2 method move through temperature space about as fast as in conventional RE, but with fewer replicas.
In the ANT technique, a replica of the MDSP, but with reduced-precision operands and a shorter computation delay, is used as the EC block. Under VOS, there are a number of input-dependent soft errors in the MDSP output ya[n]; however, the RPR output yr[n] is still correct, since the critical path delay of the replica is smaller than Tsamp. Therefore, yr[n] is used to detect errors in the MDSP output ya[n]. Error detection is accomplished by comparing the difference |ya[n] − yr[n]| against a threshold Th. Once the difference between ya[n] and yr[n] is larger than Th, the output ŷ[n] is yr[n] instead of ya[n]. As a result, ŷ[n] can be expressed as ŷ[n] = ya[n] if |ya[n] − yr[n]| ≤ Th, and ŷ[n] = yr[n] if |ya[n] − yr[n]| > Th.
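The decision rule above amounts to a per-sample threshold comparison, which can be sketched as follows. The function name and the list-of-samples interface are illustrative assumptions; in hardware this would be a comparator selecting between the MDSP and RPR outputs.

```python
def ant_output(ya, yr, th):
    """Per-sample ANT error correction.

    ya -- MDSP outputs ya[n] (possibly corrupted by VOS-induced soft errors)
    yr -- reduced-precision replica outputs yr[n] (assumed error-free)
    th -- decision threshold Th

    Returns y_hat[n]: ya[n] when |ya[n] - yr[n]| <= th, else yr[n].
    """
    return [r if abs(a - r) > th else a for a, r in zip(ya, yr)]
```

For example, with `th = 5`, a sample where the MDSP and replica agree within the threshold passes through unchanged, while a sample with a large discrepancy is replaced by the replica's (less precise but error-free) value.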