Random testing is a basic software testing technique that can be used to assess software reliability as well as to detect software failures. Adaptive random testing has been proposed to enhance the failure-detection capability of random testing. Previous studies have shown that adaptive random testing can use fewer test cases than random testing to detect the first software failure. In this paper, we evaluate and compare the performance of adaptive random testing and random testing from another perspective, that of code coverage. As shown in various investigations, higher code coverage not only brings a higher failure-detection capability, but also improves the effectiveness of software reliability estimation. We conduct a series of experiments based on two categories of code coverage criteria: structure-based coverage and fault-based coverage. Adaptive random testing can achieve higher code coverage than random testing with the same number of test cases. Our experimental results imply that, in addition to having a better failure-detection capability than random testing, adaptive random testing also delivers a higher effectiveness in assessing software reliability, and a higher confidence in the reliability of the software under test even when no failure is detected.
Adaptive random testing (ART) was proposed to enhance the failure detection capability of random testing (RT) by evenly spreading test cases over the input domain. In many ART algorithms, besides the random generation of the program inputs, some test case selection criteria are additionally used to ensure an even spread of random test cases. Though even spread is intuitively simple, there is no standard definition of even spread, let alone a standard measurement for the evenness of test case distribution. Research (Chen et al., 2007b) has attempted to use various distribution metrics to reflect, if not measure, how evenly an ART algorithm spreads test cases. Previous studies have conclusively shown that some ART algorithms, which are not regarded by certain distribution metrics as evenly spreading their test cases, usually perform poorly. This correlation between test case distribution and failure detection capability has motivated us to develop some new ART algorithms, which apply these distribution metrics, mainly discrepancy and dispersion, as test case selection criteria in ART.
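The passage names discrepancy and dispersion without defining them operationally. As an illustrative sketch only (the Monte Carlo estimators, function names, and unit-hypercube domain below are our own assumptions, not the cited algorithms), both metrics can be estimated for a set of test cases:

```python
import random

def dispersion_estimate(points, dim=2, trials=2000):
    """Estimate dispersion: the radius of the largest empty ball,
    approximated as the max over random probe points of the distance
    to the nearest test case (larger = less even spread)."""
    worst = 0.0
    for _ in range(trials):
        probe = [random.random() for _ in range(dim)]
        nearest = min(sum((p[i] - probe[i]) ** 2 for i in range(dim)) ** 0.5
                      for p in points)
        worst = max(worst, nearest)
    return worst

def discrepancy_estimate(points, dim=2, trials=2000):
    """Estimate star discrepancy: max over random anchored boxes of
    |fraction of points inside - box volume| (smaller = more uniform)."""
    worst = 0.0
    n = len(points)
    for _ in range(trials):
        corner = [random.random() for _ in range(dim)]
        vol = 1.0
        for c in corner:
            vol *= c
        inside = sum(all(p[i] <= corner[i] for i in range(dim)) for p in points)
        worst = max(worst, abs(inside / n - vol))
    return worst
```

Under these estimators, a test set spread more evenly over the domain yields a smaller dispersion and a smaller discrepancy, which is exactly the property the selection criteria try to optimise.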
Adaptive Random Testing (ART), an enhancement of Random Testing (RT), aims to both randomly select and evenly spread test cases. Recently, it has been observed that the effectiveness of some ART algorithms may deteriorate as the number of program input parameters (dimensionality) increases. In this paper, we analyse various problems of Fixed-Sized-Candidate-Set ART (FSCS-ART), one ART algorithm, in the high-dimensional input domain setting, and study how FSCS-ART can be further enhanced to address these problems. We propose that the FSCS-ART algorithm incorporate a filtering process of inputs to achieve a more even spread of test cases and better failure detection effectiveness in high-dimensional space. This solution
Adaptive Random Testing (ART) has recently been proposed to enhance the failure-detection capability of Random Testing. In ART, test cases are not only randomly generated, but also evenly spread over the input domain. Various ART algorithms have been developed to evenly spread test cases in different ways. Previous studies have shown that some ART algorithms prefer to select test cases from the edge part of the input domain rather than from the centre part, that is, inputs do not have equal chance to be selected as test cases. Since we do not know where the failure-causing inputs are prior to testing, it is not desirable for inputs to have different chances of being selected as test cases. Therefore, in this paper, we investigate how to enhance some
thus the more significantly the software reliability may be improved. Neither the operational profile nor the uniform distribution makes use of any information about the probability distribution of failure-causing inputs. Therefore, RT has often been criticized as likely to have a poor failure detection capability. Recently, motivated by the observation that failure-causing inputs are clustered into contiguous failure regions, Chen et al. proposed adaptive random testing (ART) to enhance the failure detection capability of RT. The basic principle of ART is to evenly spread random test cases over the input domain. Many ART algorithms randomly generate test case candidates according to the uniform distribution, like RT in the context of debug testing, but they further use some criteria to identify test cases among the candidates so as to ensure an even spread of executed test cases. There have been studies to enhance ART by distributing test cases more evenly, but all of them have adopted the approach of enhancing the test case identification process.
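The candidate-identification idea described above can be sketched in a few lines. This is a minimal illustration of the fixed-size-candidate-set variant (FSCS-ART); the candidate-set size k, seed handling, and unit-hypercube domain are our own assumptions for the sketch:

```python
import random

def fscs_art_tests(n_tests, dim=2, k=10, seed=0):
    """Fixed-size-candidate-set ART: each round, draw k random
    candidates and execute the one whose nearest previously executed
    test case is farthest away (max-min distance selection)."""
    rng = random.Random(seed)

    def dist(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5

    tests = [[rng.random() for _ in range(dim)]]  # first test case: pure random
    while len(tests) < n_tests:
        candidates = [[rng.random() for _ in range(dim)] for _ in range(k)]
        best = max(candidates, key=lambda c: min(dist(c, t) for t in tests))
        tests.append(best)
    return tests
```

Candidates are still generated uniformly, exactly as in RT; only the identification step (the max-min selection) differs, which is the distinction the excerpt draws.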
Some people may regard the role of diversity as overstated, or as little more than common sense — expected, taken for granted, and not really worth talking about or investigating seriously. In contrast, this paper has presented strong objective evidence to support its importance. The strengths gained from diversity — as seen in so many fields and disciplines — should mean that having diversity as a guiding principle in software testing is very philosophically appealing, in view of the fact that programmers can make many kinds of errors. Such a guiding principle may enable a shift in how some software testing research has been viewed, encouraging new insights and perspectives, potentially leading to better and stronger approaches. An example of this is the inherent ability of RT to generate a relatively diverse set of test cases due to randomness, but at a low cost. This reasonably easily obtained diversity is simple, yet effective (as proven by theoretical analysis), and may serve to ensure that random testing will never be obsolete. Indeed, enhanced versions of random testing, such as adaptive random testing, should draw increased research attention and merit further development. Furthermore, ART involves a simple form of diversity — arguably the simplest form — and we suggest that other forms of diversity should be considered in the design of new testing methods. In fact, although ART's spatial diversity was inspired by the common occurrence of failure-causing input contiguity — and this spatial diversity remains an integral, defining characteristic of ART — ART itself has inspired other kinds of diversity-based testing approaches and research.
The failure-detection effectiveness of random testing (RT) can be improved by evenly spreading the test cases across the input domain. Adaptive random testing (ART) is a family of algorithms that achieve this notion of an even spread. Many ART algorithms, however, have high computational overhead, which may reduce their cost-effectiveness in testing, and thus affect their use in practice. Quasi-random testing (QRT) was proposed as an enhancement to the failure-detection effectiveness of RT, while maintaining the computational overhead at linear order. In QRT, test cases are generated based on quasi-random sequences, which are sets of points with low discrepancy and low dispersion. However, the two randomization methods used in the original QRT have some shortcomings, including that one does not introduce much randomness into the sequences, and that the other does not support incremental test case generation.
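As a concrete illustration of such low-discrepancy sequences, the following is the standard van der Corput/Halton construction; it is a textbook example, not the specific sequences or randomization methods of the QRT work:

```python
def van_der_corput(index, base=2):
    """i-th element of the van der Corput low-discrepancy sequence:
    reflect the base-b digits of the index about the radix point."""
    result, denom = 0.0, 1.0
    while index > 0:
        index, digit = divmod(index, base)
        denom *= base
        result += digit / denom
    return result

def halton(index, bases=(2, 3)):
    """Multi-dimensional Halton point: one van der Corput sequence
    per dimension, using pairwise coprime bases."""
    return [van_der_corput(index, b) for b in bases]
```

Each point is computed directly from its index in linear time, which is why quasi-random generation keeps the overhead at the linear order the excerpt mentions, and naturally supports incremental generation.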
Previous research into random testing has aimed at distributing test cases more evenly to reduce the time it takes to find faults. Adaptive Random Testing (ART) maximises the distance between existing test cases, and Restricted Random Testing (RRT) sets up an exclusion zone around each test case. Quasi-random sequences and lattices have also been investigated. There are some limitations to this technique. Chen and Merkel have proven that, without using information about the behaviour of the software, no strategy can find a fault with fewer than half the test cases needed by a purely random strategy. Arcuri and Briand have shown that the added expense involved with these techniques can sometimes outweigh the benefits.
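The exclusion-zone idea behind RRT can be sketched as follows; the radius value, the shrinking rule when the domain saturates, and the unit-square domain are illustrative assumptions, not details of the cited work:

```python
import random

def rrt_tests(n_tests, radius=0.15, dim=2, max_tries=10000, seed=1):
    """Restricted Random Testing: reject any random candidate that
    lands inside the exclusion zone (a ball of the given radius)
    around an already-executed test case."""
    rng = random.Random(seed)
    tests = []
    while len(tests) < n_tests:
        for _ in range(max_tries):
            c = [rng.random() for _ in range(dim)]
            if all(sum((x - y) ** 2 for x, y in zip(c, t)) ** 0.5 >= radius
                   for t in tests):
                tests.append(c)
                break
        else:  # domain saturated: shrink the zones and keep trying
            radius *= 0.9
    return tests
```

The repeated rejection sampling is also where the extra expense noted by Arcuri and Briand comes from: each additional test case must survive a distance check against all earlier ones.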
included the administration of secure direct computer-based GRE forms (i.e., paper-and-pencil forms administered by means of computer), to assess examinee acceptance of, and comfort with, the computer delivery system. This first stage is seen as temporary, with a move to the second stage, administration of adaptive tests that are comparable in content to the paper-and-pencil versions of the test. During this stage, paper-and-pencil and adaptive testing will exist at the same time, with scores from both sorts of examinations used in the graduate school admissions process. In the third and final stage, paper-and-pencil testing will be suspended and all testing will be done on the computer by means of the adaptive process.
Abstract: Learning objects are pedagogic software components which are interoperable, exchangeable and reusable between web-based learning environments, and adaptive learning and testing can provide each student with personalized learning content or assessment questions. In this paper, we describe our existing adaptive authoring tool which can be used to convert a non-adaptive course into an adaptive one which uses learning objects. Learning material can be reused in our framework which consists of lesson instructions, pre-tests, performance tests and proficiency tests. Our current metadata for describing the learning material will be merged with a simplified and customised version of Learning Object Metadata to allow the import and export of learning objects between different learning environments.
showed that these methods are efficient for sample sizes, n, substantially larger than the observation dimension, K, but the χ² test relies heavily on knowledge of the covariance structure, and Hotelling's T² becomes inapplicable if n is smaller than K and inefficient if n is larger than but close to K. We underlined that this loss of efficiency, also shown in our application examples, is due to the search for effects in every direction of the multivariate space, and we argue that this loss can be avoided if prior information about the direction of the effect is available. This can be practically implemented by linear combination tests, where the targeted direction is selected using the weighting vector. The latter reduces the observation vectors to scalar linear combinations, which are then used to construct z and t test statistics. The available approaches for linear combination testing were discussed, including their shortcoming of requiring specific effect structures, such as the uniform, to attain high power performance.
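The weighting-vector reduction described here is simple to write down. The sketch below (function and variable names are hypothetical, and a known covariance matrix is assumed, as in the z-test setting) reduces each K-dimensional observation to the scalar w·x and forms the resulting z statistic:

```python
import math

def linear_combination_z(samples, w, sigma):
    """Reduce K-dimensional observations to scalars s_i = w . x_i and
    test mean(s) = 0 with a z statistic, assuming a known covariance
    matrix sigma, so that Var(s) = w' sigma w."""
    n = len(samples)
    scores = [sum(wi * xi for wi, xi in zip(w, x)) for x in samples]
    mean = sum(scores) / n
    var = sum(wi * sum(sij * wj for sij, wj in zip(row, w))
              for wi, row in zip(w, sigma))
    return mean / math.sqrt(var / n)
```

Choosing w along the anticipated direction of the effect concentrates the test's power there, which is precisely how the targeted direction avoids the all-directions efficiency loss described above.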
Given our game-centric view of proofs of knowledge we can extend the approach to adaptive proofs of knowledge. An adaptive proof is simply a proof scheme where the extractor can still win if the prover is given multiple turns to make proofs. The adaptive part is that the game hands the extractor's witness in each turn back to the prover before the prover must take her next turn. Should a prover be able to produce a proof for which she does not know the witness, she could then use the extractor's ability to find a witness to help make her next proof. The intuition is essentially the same for the cases with and without simulation soundness. We first introduce adaptive proofs formally without simulation soundness using so-called n-proofs, where n is a parameter describing the number of rounds the prover plays. In a later step we add simulation soundness. Adaptive proofs and n-proofs. Let (P, V) be a proof scheme for a relation R. An adaptive prover P̂ in the ROM is a component that can make two kinds of queries, repeatedly and in any order. The first are random oracle queries; these are self-explanatory. The second are extraction queries, which take a statement and a proof as parameters. The response to an extraction query is a witness. (Correctness conditions will be enforced by the game, not the prover.) Adaptive provers may also halt. For example, a non-adaptive prover can be seen as an adaptive prover that halts after its first extraction query.
Traditional tests are not targeted and are not able to accurately test the real level of the testees. Because adaptive testing can solve these problems, it now attracts attention and has seen significant development in the field of educational testing. CAT was first introduced in the United States and has been used in various fields such as the Graduate Record Examination, the Graduate Management Admission Test, and the national nursing licensure test. In China, it started late, but in recent years there have been some adaptive tests and successful applications. For example, the Shanghai TV University has adopted the CAT test design method in the computer application capacity test project "VB6.0 program design". The CET-4 and CET-6 have also been committed to CAT research and development.
Several works have been proposed to improve the performance of random access, with particular attention to machine-to-machine communication (small low-duty-cycle packets). Reference  investigates a resource allocation scheme for spatial multi-group random access to reduce packet collision in a single cell and interference among multiple cells. Authors in  propose a collision resolution method for random access based on the fixed timing advance information for fixed-location devices in LTE/LTE-A. Authors in  introduce a code-expanded method for random access in LTE/LTE-A, where the amount of available contention access resources is expanded to reduce the collision rate. Reference  investigates a cooperative random access class barring scheme for global stabilization and access load sharing. In the proposed method, each group is assigned a specific access class barring to differentiate access priorities. Authors in  present a prioritized random access scheme to provide quality of service (QoS) for different classes, where different access priorities are achieved through different backoff procedures. Reference  points out some possible directions for controlling the overhead of random access, namely access class barring schemes, separate random access channel (RACH) resources for machine-type devices, dynamic allocation of RACH resources, specific backoff schemes for machine-type devices, slotted access, and pull-based schemes. Splitting the random access preambles into two (non-)overlapping sets, one for human-type communication and the other for machine-type communication, is proposed by 3GPP  as a means to control the collision rate.
The first contribution of this work is a construction of a wavelet decomposition of Random Forests (Breiman 2001), (Biau and Scornet 2016), (Denil et al. 2014). Wavelets (Daubechies 1992), (Mallat 2009) and geometric wavelets (Dekel and Leviatan 2005), (Alani et al. 2007), (Dekel and Gershtansky 2012) are a powerful yet simple tool for constructing sparse representations of ‘complex’ functions. The Random Forest (RF) (Biau and Scornet 2016), (Criminisi et al. 2011), (Hastie et al. 2009), introduced by Breiman (Breiman 2001), (Breiman 1996), is a very effective machine learning method that can be considered as a way to overcome the ‘greedy’ nature and high variance of a single decision tree. When combined, the wavelet decomposition of the RF unravels the sparsity of the underlying function and establishes an order of the RF nodes from ‘important’ components to ‘negligible’ noise. Therefore, the method provides a better understanding of any constructed RF. This helps to avoid over-fitting in certain scenarios (e.g. a small number of trees), to remove noise, or to provide compression. Our approach could also be considered as an alternative method for pruning of ensembles (Chen et al. 2009), (Kulkarni and Sinha 2012), (Yang et al. 2012), (Joly et al. 2012), where the most important decision nodes of a huge and complex ensemble of models can be quickly and efficiently extracted. Thus, instead of controlling complexity by restricting trees' depth or node size, one controls complexity through adaptive wavelet approximation.
Channel filter coefficients are collected to start with. Eight datasets need altogether about 80 ms (1.6 Kb/s), allowing 12 reseedings a second, which would only rarely be needed. By hybridizing in samples of a free-running counter, additional randomness is gained and the safety improves against HW-based attacks trying to influence the channel filter coefficients. The 4 LS bits of each of 8 sets of 11 channel filter coefficients, together with the counters, give 384 raw seed bits, used in two halves as XSEED values, in two iterations of the FIPS-186-2 generator. The generator for cryptographic random numbers specified in the FIPS-186-2 document  was used, with SHA-1 as hash function and a 24-byte (192-bit) internal state. While x is a desired (160-bit) pseudorandom number (it may be cut and the pieces combined for the requested number of bits), the following FIPS-186 algorithm generates m random values of x.
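The key-update loop of the FIPS-186-2 generator can be illustrated as follows. Note that this is a simplified sketch, not a conforming implementation: it substitutes plain SHA-1 for the standard's G function (the real G drives SHA-1's compression function with a custom chaining value t), and the state width b = 160 here is our own simplification of the 192-bit state mentioned above:

```python
import hashlib

B = 2 ** 160  # internal state modulus (b = 160 bits in this sketch)

def g(t_val):
    """Stand-in for the FIPS-186-2 G function: here simply SHA-1 of
    the b-bit input; an illustrative shortcut, not the real G."""
    return int.from_bytes(hashlib.sha1(t_val.to_bytes(20, "big")).digest(), "big")

def fips186_style_prng(xkey, m, xseed=0):
    """Generate m pseudorandom 160-bit values in the style of the
    FIPS-186-2 general-purpose generator: x_j = G(XKEY + XSEED),
    then XKEY = (1 + XKEY + x_j) mod 2^b."""
    out = []
    for _ in range(m):
        xval = (xkey + xseed) % B
        x = g(xval)
        out.append(x)
        xkey = (1 + xkey + x) % B
    return out
```

The XSEED input is where the raw seed bits from the channel filter coefficients would be mixed in: each reseeding supplies a fresh XSEED half without resetting XKEY, so external entropy perturbs, rather than replaces, the internal state.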
2. The parameter adjustment of the test profile for different subdomains is uniform in original DRT, which may not be the best solution to improve the effectiveness of DRT. It is difficult for DRT to track the real distribution of the defect detection rate across the test case subdomains in time. Yang et al.  used the real-time defect detection rate to adjust the parameters during the testing process; however, this only improves the algorithm process without exploiting information about the software, especially information about the test cases. In software testing, the generation of test data is one of the key steps and has a great effect on software testing, which includes the selection and classification of test cases.
The proposed tests employ adaptive designs allowing for sequential modifications of the test statistic based on accumulated data. Such adaptive designs have straightforward but not exclusive application in clinical trials. A large literature on the subject (e.g., Bauer and Köhne 1994; Proschan and Hunsberger 1995; Lehmacher and Wassmer 1999; Müller and Schäfer 2001; Brannath, Posch, and Bauer 2002; Liu, Proschan, and Pledger 2002; Brannath, Gutjahr, and Bauer 2012) deals with the derivation of flexible procedures that allow for adaptations of the initial design without inflation of the Type I error rate. Some sequential designs (e.g., Denne and Jennison 2000) also permit design adaptations, but the latter need to be preplanned and independent of the interim test statistics. Adaptive designs are employed
Adaptive filtering is a universal tool in many areas such as communications, control, and statistical signal processing. Basically, adaptive algorithms appear whenever we encounter time-variant systems with little a priori information about the underlying signals. This is a topic with much practical applicability and interesting theoretical challenges. Although the ideas of adaptive signal reconstruction date back to Gauss and the field can be considered a classical one with numerous textbooks and established practice, rigorous analysis of the steady-state and transient behavior of adaptive filters remains a formidable task in most cases. The reason is that adaptive filters are time-varying, often nonlinear, and at the same time stochastic objects. When the regressors are random, as is the case in almost all applications, many classes
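The least-mean-squares (LMS) filter is the canonical example of such an algorithm; we pick it here purely for illustration, since the excerpt does not single out any particular filter. A minimal sketch:

```python
def lms_filter(x, d, taps=4, mu=0.05):
    """Least-mean-squares adaptive FIR filter: at each step, predict
    d[n] from the last `taps` inputs, then nudge the weights along the
    negative gradient of the instantaneous squared error."""
    w = [0.0] * taps
    errors = []
    for n in range(taps, len(x)):
        window = x[n - taps:n][::-1]   # most recent sample first
        y = sum(wi * xi for wi, xi in zip(w, window))
        e = d[n] - y                   # estimation error
        w = [wi + mu * e * xi for wi, xi in zip(w, window)]
        errors.append(e)
    return w, errors
```

The stochastic character mentioned above is visible even in this toy version: the weight trajectory depends on the random regressor samples in `window`, which is exactly what makes transient analysis of such filters hard.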
The testing technique based on the specification of the software system depends upon the specification, written in either a formal or an informal language. Above, a comprehensive overview of software testing concepts, techniques, and processes has been presented. There are three general styles of specification: process algebras, which describe systems in terms of the behavior and interaction of active agents; algebraic specifications, which describe systems constructively in terms of applications of system operations, where states are defined by construction from some defined basic structure and passed as parameters; and model-based specifications, which construct explicit models of the system state and show how the state can be changed by various operations. In specification-based testing, the general form of the documentation is relative to program specifications. Early approaches looked at the input/output relation of the program seen as a "black box" and used equivalence class partitioning, boundary conditions, or cause-effect graphs to explore a systematic way to combine all the possible input conditions.
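For instance, equivalence class partitioning for a single input can be sketched as follows; the age-range specification is hypothetical, invented purely for illustration:

```python
def partition(age):
    """Equivalence-class partitioning for an 'age' input whose valid
    range is 18..65 (a hypothetical spec): each distinct program
    behaviour defines one class of inputs."""
    if age < 18:
        return "reject-too-young"
    if age <= 65:
        return "accept"
    return "reject-too-old"

# One representative per class, plus boundary values at each edge of
# the valid range, as boundary-condition analysis suggests.
test_inputs = [17, 18, 40, 65, 66]
```

A tester then runs one representative per class rather than every possible input, on the assumption that all members of a class exercise the same specified behaviour.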