Unlike research HTM proposals, the first-generation HTM systems from IBM [39, 70] and Intel impose significant restrictions, which limit their suitability for lock-free programming. These “best effort” HTMs [12, 45] do not guarantee progress for arbitrary transactions: a transaction attempt will fail if it (a) attempts to access too many distinct locations; (b) executes for longer than a scheduler quantum of the operating system; or (c) attempts to perform an unsupported operation, such as a system call. Transaction attempts can also fail due to memory accesses that conflict with concurrent operations from other transactions, or accesses that conflict with concurrent nontransactional code. This property, called “strong atomicity”, is a natural outcome of implementing HTM through the cache coherence protocol. It also allows for clever composition of transactional and nontransactional code [12, 16, 72]. Even if these limitations did not exist, it is unlikely that HTM could ever fully replace the best concurrent data structure implementations. As recently reported by Gramoli, concurrent data structures implemented directly from synchronization primitives (i.e., CAS) tend to provide the best performance in comparison to those implemented by using locks or transactions.
It has previously been shown that a large class of lock-free algorithms actually behave like wait-free algorithms in practice. The question we raise is whether blocking implementations can also behave in a practically wait-free manner. One would expect a negative answer given the frequent use of mutual exclusion and the varied conditions under which these algorithms must run. As we show in Chapter 5, the answer is, however, yes for search data structures. To obtain this answer, we first introduce a set of performance metrics capable of capturing wait-free behavior in practice: the absence of significant delays due to concurrency. We then look at state-of-the-art implementations of concurrent data structures under a large variety of workloads and conditions. The conclusion we draw is that blocking search data structures exhibit virtually no significantly delayed requests. We also give a theoretical intuition for the result, through an analogy with the birthday paradox. The most problematic case for blocking algorithms is frequent interrupts, in particular context switches. We show how best-effort Hardware Transactional Memory (HTM) can be leveraged to allow blocking algorithms to maintain practically wait-free behavior in this case. By changing only the locking code to use HTM, and by taking advantage of the seeming inconvenience of hardware transactions aborting upon interrupts, threads virtually never hold locks while preempted. In summary, our work helps programmers to better determine when a blocking implementation is sufficient for their needs, thus potentially simplifying the task of implementing a concurrent data structure to a substantial degree.
Consistency may be trivially achieved using mutual exclusion locks that serialize access to the entire data structure, an approach also called coarse-grained locking. However, it severely limits concurrency. Even if the number of locks is increased, i.e., fine-grained locking, such implementations remain vulnerable to pitfalls such as deadlock, priority inversion, and convoying. An alternative approach is a lock-free implementation. In a lock-free concurrent data structure, at least one non-faulty processing thread is guaranteed to complete its operation in a finite number of steps. Effectively, lock-free data structures foster both scalability and a progress guarantee. A stronger progress guarantee is wait-freedom, which ensures that all non-faulty processes finish their operations in a finite number of steps. However, wait-freedom most often results in poor performance. Another approach to implementing a consistent concurrent data structure is using software transactional memory (STM). However, the performance of such implementations largely depends on the design of the STM. Unsurprisingly, using STM to design concurrent data structures has often resulted in unacceptable performance.
This paper presented a new data structure that is well-suited for linear structure-based shared documents (such as text documents) in collaborative editors. To ensure a high degree of concurrency, we proposed a new technique to uniquely identify elements inside the shared document. These identifiers are simply real numbers which are manipulated under precision control in order to avoid the problem of infinite precision. This technique is quite simple and guarantees the uniqueness of these identifiers. Based on two (strong and lazy) forms of the happened-before relation, we proposed two concurrency control procedures: the first procedure allows concurrent operations to be executed in either order. The second one enables us to extend the concurrency even to operations generated by the same user, as our identifiers are unique and totally ordered. We performed a performance evaluation which shows that our unique identifier scheme is well-suited for linear data structures. In future work, we intend to investigate the impact of our work when undoing operations. Furthermore, we plan to extend our unique identifier scheme to other data structures such as trees and graphs.
Big data often come with even less metadata than conventional databases, which is a problem when the data scientist wants to perform analyses on these data. The use of our DQM tool would help the data scientist in recognizing data types (integer, dates, strings) and data semantics (Email, FirstName, Phone). The semantics would then be useful to automatically suggest views on data with a semantic meaning or to find matches between heterogeneous structures in big data.
In addition to having to recognize the file type, because indexes are based around collections of characters (typically words), adding items to an index is only meaningful when it is possible to identify strings of symbols as discrete words or ‘related sequences of characters’ within a block of source data. That entails not only being able to properly decode all the file types in the data to be indexed, but also managing to identify the words or related sequences of characters contained in those files. This causes problems when faced with foreign languages that do not contain word breaks, i.e. where words do not necessarily have white space characters between them.
Hierarchical regression analyses were performed for the continuous and categorical outcome variables (Tables 4 and 5). In each analysis, groups of predictor variables were added to models by using forward stepwise variable selection (SPSS 11.5). The models were constructed to determine the predictive effects of psychopathology (concurrent depression or anxiety symptoms) at intake on addiction treatment outcome over and above factors that are clearly known to influence outcome. Previous research has indicated that the time spent in treatment is a predictor of abstinence after treatment (30,31). Thus the number of days spent in treatment was used as a covariate in all regression analyses. The order of entry of predictor variables in the regression analyses was as follows: days in treatment was entered in the first step, demographic variables (age and
DIVISION OF HIGH SCHOOL RELATIONS
In response to the popularity of the EXCELerate program and subsequent robust growth of concurrent enrollment offerings in the TCC service area, TCC created a High School Relations Division charged with providing academic oversight, administration, and strategic planning for concurrent enrollment. Administrative decisions are based on NACEP Accreditation Guidelines and review of NACEP Standards for Accreditation. Since August 2012, actions by the Dean of High School Relations include: Faculty Liaisons: The Dean recruited full-time faculty as liaisons to support faculty adjuncts teaching at high school sites. Faculty Liaisons provide academic oversight by observing one or more class sessions at the site as well as checking syllabi and grading rubrics used for determining student grades. Faculty Liaisons, who have taught the class, also provide insight for classroom management issues and empower adjuncts to create a collegiate environment at the site. Regular monthly meetings with liaisons are held to review site visit and adjunct instructor evaluations. Meeting topics address communication challenges between institutions as well as process improvement. TCC has begun supporting EXCELerate faculty through training and orientations which are provided each semester. Approximately 60 attendees participated in the most recent session. Information packets are disseminated to faculty, Faculty Liaisons, and Associate Deans that include TCC policy, CEP guidelines, and high school site-specific information. The Dean of High School Relations in collaboration with the TCC faculty association concurrent enrollment committee continues to improve support for quality and rigor through the implementation of the Concurrent Enrollment Partnership (CEP) guidelines (see appendix H).
Relativistic programming does not define a specific method of synchronization between writers. In the simplest case, writers may synchronize via coarse-grained locking; if writes occur sufficiently infrequently compared to reads, this may suffice. To improve concurrency, writers may opt for fine-grained locking instead, partitioning the data structure into independent pieces and applying a lock to each. Typically, these pieces will remain independent for readers as well, making any ordering concerns between writers moot; however, if readers may potentially read addresses written simultaneously by multiple writers, and the ordering of those write operations matters, the writers must arrange an appropriate barrier between their write operations.
Memcached requires the ability to scale to various workload sizes at runtime; as a result, it requires a resizable hash table. Previous non-resizable RCU hash tables could not provide the flexibility necessary for memcached. I implemented a new relativistic storage engine in memcached, and modified memcached to support a new fast path for the GET request. memcached’s default implementation goes to great lengths to avoid copying data when servicing a GET request; memcached also services multiple concurrent client connections per thread in an event-driven manner. As a result of these two constraints, memcached maintains reference counts on each key-value pair in the hash table, and holds a reference to the found item for a GET from the time of the hash lookup to the time the response gets written back to the client. In implementing the relativistic storage engine, I chose instead to copy the value out of a key-value pair while still within a relativistic reader; this allows the GET fast path to avoid interaction with the reference-counting mechanism entirely. The GET fast path checks the retrieved item for potential expiry or other conditions that would require mutating the store, and falls back to the slow path in those cases.
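The copy-out-under-the-reader idea can be sketched as follows. Everything here is a hypothetical stand-in: `read_lock`/`read_unlock` are no-op placeholders for the engine's relativistic read-side primitives, and the single-entry "table" exists only to make the sketch self-contained; none of these names come from memcached itself.

```c
#include <string.h>

struct item { const char *key; const char *value; int expired; };

/* Placeholders for the relativistic read-side critical section. */
static void read_lock(void)   { }
static void read_unlock(void) { }

/* Invented single-item table so the sketch runs standalone. */
static struct item table_entry = { "greeting", "hello", 0 };

static struct item *item_lookup(const char *key)
{
    return strcmp(table_entry.key, key) == 0 ? &table_entry : NULL;
}

/* GET fast path: copy the value out while still inside the reader,
 * so no reference count must outlive the critical section.
 * Returns 0 on a miss, expiry, or an oversized value; the caller
 * then takes the slow path. */
int get_fast_path(const char *key, char *buf, size_t buflen)
{
    int ok = 0;
    read_lock();
    struct item *it = item_lookup(key);
    if (it && !it->expired && strlen(it->value) < buflen) {
        strcpy(buf, it->value);   /* copy under the read-side lock */
        ok = 1;
    }
    read_unlock();
    return ok;
}
```

The trade-off the text describes is visible here: the copy costs some bandwidth per GET, but once `read_unlock` returns, the response buffer is private to the connection and no shared reference-count state was ever touched.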
Placing critical data in the hands of a cloud provider should come with the guarantee of security and availability for data at rest, in motion, and in use. Several alternatives exist for storage services, while data confidentiality solutions for the database as a service paradigm are still immature. We propose a novel architecture that integrates cloud database services with data confidentiality and the possibility of executing concurrent operations on encrypted data. This is the first solution supporting geographically distributed clients to connect directly to an encrypted cloud database, and to execute concurrent and independent operations including those modifying the database structure. The proposed architecture has the further advantage of eliminating intermediate proxies that limit the elasticity, availability, and scalability properties that are intrinsic in cloud-based solutions. The efficacy of the proposed architecture is evaluated through theoretical analyses and extensive experimental results based on a prototype implementation subject to the TPC-C standard benchmark for different numbers of clients and network latencies. KEYWORDS: Infrastructure-as-a-Service (IaaS); Platform-as-a-Service (PaaS); Software-as-a-Service (SaaS); Fork-Join-Causal consistency (FJC)
The main conclusion we can draw from this paper is that, practically, one can achieve the behavior of wait-free CSDS algorithms in terms of individual thread progress with blocking implementations. The nature of search data structures is such that there is no single contention point in the structure, rendering locks less problematic than they might be for structures such as concurrent queues, stacks, and counters. Our conclusions only concern CSDSs: we do not claim blocking implementations of objects such as queues and stacks are practically wait-free. It is also important to note that we do not claim that every blocking CSDS algorithm is practically wait-free. Rather, we considered state-of-the-art blocking algorithms, which generally have synchronization-free reads and writes with minimal and extremely fine-grained synchronization. We find such practically wait-free algorithms for each data structure we study. In addition to showing that these algorithms are practically wait-free, this work also represents the first detailed quantification of the effects of a progress guarantee on the behavior of practical CSDSs. Moreover, we have shown how new technologies such as Intel TSX can be leveraged to provide the desired performance characteristics of CSDSs. We also note that while the experiments presented in this paper were run on an Intel Xeon server, we have verified our conclusions on other architectures as well, including servers from AMD and Oracle.
Smartphone users are sharing much more information, including personal information about their activities and what they are doing, through social media such as Twitter and Facebook. With the popularity of smartphones, social media are becoming more prevalent and have emerged as a new way of life (Hathi, 2009). IBM (2013) reported more than two billion Internet users, 4.6 billion mobile phones, more than 500 million Facebook users, and about 340 million tweets posted every day on Twitter. These phenomena gave rise to a new term: ‘Big Data’. Originally the term referred to the sheer volume of data, but its scope has expanded to include variety and velocity. It denotes not only the data itself, but also a new paradigm. It drives the development of new methods and technologies for obtaining, processing, and analyzing unprecedented data (i.e., unstructured data). Big Data pose grand challenges as well as opportunities to advance business intelligence; for example, new computing architectures for processing Big Data, and interoperability issues for sharing and managing Big Data.
In a time when monetary budgets and staff availability are decreasing while the need for restoration is increasing, making land management and restoration decisions can often be difficult and overwhelming. Where should we focus our efforts? How can we be efficient and effective? How do we know our management is enhancing the natural ecosystem and benefiting native flora and fauna? Utilizing available tools and technology, land managers can easily make data-driven, intentional and impactful management decisions. Learn how Mt. Cuba Center’s Natural Lands staff prioritize, monitor, collect, and analyze data using ArcGIS to inform and guide future habitat restoration and management actions on 535 acres of preserved land in the Delaware Piedmont. Natural Lands Manager Nathan Shampine will both demonstrate examples of how his team has chosen reforestation sites and show how they have developed a long term vegetation survey to track changes over time.
are not achieved until the corrupted content is used; the attacks are not detected until corrupted buffers are checked by a monitor thread. Like DieHarder, Cruiser provides probabilistic heap safety against this kind of attack. Assume an attack takes time E to achieve the exploit after canary smashing and a cruise cycle takes time C. If E ≥ C, Cruiser can detect the overflow before the exploit is achieved; otherwise, assuming the detection latency (the time elapsed from the overflow until detection) is uniformly distributed on the interval (0, C], the probability P for Cruiser to detect an exploit before it completes is E/C. As shown in Table 3, with Lazy Cruiser, 5 of the 12 benchmarks’ average cruise cycles are shorter than 0.5 µs, and 8 of them are not longer than 0.5 ms; with Eager Cruiser, 5 benchmarks’ average cruise cycles are not longer than 1.6 µs, and 7 of them are shorter than 0.2 ms. The average cruise cycles for the Apache test are 16 µs and 78 µs for the Lazy and Eager Cruisers, respectively. Considering C = NT, where N is the number of nodes in the CruiserList and T is the average time for a monitor thread to check a node, we can expect a high prevention probability by keeping N small. One way is to divide the CruiserList into several segment groups and create the same number of monitor threads, each of which cruises over a shorter part of the CruiserList. For example, by dividing the CruiserList into two parts and running one more monitor thread on either part in the omnetpp test, the average cruise cycle decreased by 42% and 47% for Lazy Cruiser and Eager Cruiser, respectively. Another way is to only monitor suspicious buffers, for example, those buffers involved in data flows that stem from networks or user inputs.
With the rapid growth of the Internet, the advancement of technology, and the reduced cost of electronic components, more and more users are accessing and transferring mobile data through the various network interfaces of devices such as laptops, notebooks, tablets, and smartphones, using wireless technologies such as 802.11, Bluetooth, GSM, 3G, WiMax, etc. The existing wireless technologies differ in the services they provide, such as bandwidth, coverage, price, and quality-of-service support. If usage of the interfaces available on the user device is restricted to one interface at a time, this limits flexibility and utilization. Using multiple interfaces simultaneously can improve quality and support applications requiring high bandwidth. Furthermore, delay can be reduced when alternate communication paths are kept alive, enhancing the reliability of data delivery. A Heterogeneous Wireless Network (HWN) is a wireless communication network in which Internet services can be accessed through multiple wireless technologies such as WiFi, WiMAX, GSM, etc. Nowadays many Internet applications demand high bandwidth, and the bandwidth of an individual technology is not sufficient to meet this demand. Hence, aggregating individual low-bandwidth links forms a larger, high-speed logical link. Bandwidth aggregation in a heterogeneous wireless network provides many benefits for real-time applications.
Concurrency libraries can facilitate the development of multithreaded programs by providing concurrent implementations of familiar data types such as queues or sets. There exist many optimized algorithms that can achieve superior performance on multiprocessors by allowing concurrent data accesses without using locks. Unfortunately, such algorithms can harbor subtle concurrency bugs. Moreover, they require memory ordering fences to function correctly on relaxed memory models.
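The fence requirement can be illustrated with the classic message-passing idiom, here written with explicit C11 fences. Without the two `atomic_thread_fence` calls, a relaxed memory model may let a consumer observe the `ready` flag before it observes the data the flag guards. Names are illustrative only.

```c
#include <stdatomic.h>

static int data;            /* payload guarded by the flag */
static atomic_int ready;    /* zero-initialized publication flag */

/* Producer: the release fence orders the plain write to `data`
 * before the subsequent relaxed store to `ready`. */
void producer(int v)
{
    data = v;
    atomic_thread_fence(memory_order_release);
    atomic_store_explicit(&ready, 1, memory_order_relaxed);
}

/* Consumer: the acquire fence orders the relaxed load of `ready`
 * before the read of `data`, pairing with the producer's fence.
 * Returns 0 if the data has not been published yet. */
int consumer(int *out)
{
    if (!atomic_load_explicit(&ready, memory_order_relaxed))
        return 0;
    atomic_thread_fence(memory_order_acquire);
    *out = data;
    return 1;
}
```

On a strongly ordered architecture such as x86 this code may appear to work even with the fences deleted, which is exactly why such bugs are subtle: they surface only on relaxed-memory hardware such as ARM or POWER.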