The paper aims to optimize the therapeutic dose and time window of picroside II in cerebral ischemic injury in rats by orthogonal test. Forebrain ischemia models were established by the bilateral common carotid artery occlusion (BCCAO) method. The successful models were randomly divided into sixteen groups according to an orthogonal experimental design and treated by injecting picroside II intraperitoneally at different ischemic times with different doses. The concentrations of neuron-specific enolase (NSE), the neuroglial marker protein S100B and myelin basic protein (MBP) in serum were determined by enzyme-linked immunosorbent assay to evaluate the therapeutic effect of picroside II in cerebral ischemic injury. The results indicated that the best therapeutic time window and dose of picroside II in cerebral ischemic injury were 1.5 h of ischemia with 20 mg/kg body weight, according to the concentrations of NSE, S100B and MBP in serum. It is concluded that, according to the principle of the lowest therapeutic dose with the longest time window, the optimized regimen is injecting picroside II intraperitoneally at 20 mg/kg body weight at 1.5 h of ischemia in cerebral ischemic injury in rats.

After the factors (meteorological parameters) of the experimental index (wind power ramp) are determined, these factors can be classified into a number of grades according to specific requirements. Then the orthogonal table, which is based on combinatorial mathematics theory [35], can be designed. In the orthogonal table, the first column indicates the experiment number and the first row indicates the factors to be analyzed (wind speed, wind direction, pressure, relative humidity, etc.). The remaining numbers indicate the grade number assigned to each factor. The orthogonal table is designed according to the principle that all possible situations are considered for any two factors. An example of a three-factor, two-level orthogonal test is shown in Table 1.
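The defining balance property of such a table (for any two factors, every combination of grade levels occurs equally often) is easy to verify programmatically. Below is a minimal sketch using the standard L4(2^3) array as the three-factor, two-level example; the array here is the textbook layout and is only assumed to match Table 1:

```python
from itertools import combinations, product

# L4(2^3) orthogonal array: 4 runs, 3 two-level factors.
# Rows = experiment number, columns = grade (level) of each factor.
L4 = [
    [1, 1, 1],
    [1, 2, 2],
    [2, 1, 2],
    [2, 2, 1],
]

def is_orthogonal(table):
    """Check the defining property: for every pair of columns,
    each combination of levels appears the same number of times."""
    n_cols = len(table[0])
    levels = sorted({v for row in table for v in row})
    for c1, c2 in combinations(range(n_cols), 2):
        counts = {pair: 0 for pair in product(levels, repeat=2)}
        for row in table:
            counts[(row[c1], row[c2])] += 1
        if len(set(counts.values())) != 1:
            return False
    return True

print(is_orthogonal(L4))  # True: all pairs of factors are balanced
```

Note that a full factorial for 3 factors at 2 levels would need 8 runs; the orthogonal array covers all pairwise level combinations in only 4.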

A nickel particle coating was electrodeposited on a carbon paper (CP) surface to form carbon-based nickel, which was then used as the cathodic catalyst for hydrogen evolution in a microbial electrolysis cell (MEC). The primary electrodeposition parameters considered were the electrolyte concentration of nickel sulfate, the imposed current density, and the plating time. An orthogonal test was designed based on three factors and three levels. Results showed that the optimal operating parameters were as follows: 30 g L⁻¹ nickel sulfate, 12 A m⁻² imposed current density, and 10 min of plating time. The chemical composition and morphology characteristics were revealed using XRD and SEM, respectively. In addition, carbon-based nickel obtained under the optimal electrodeposition parameters was used as the cathode in a stably running microbial electrolysis cell. The evolved gas volume was 8.1 ± 0.1 mL, and the hydrogen content was 82.6 ± 2.1%.
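A three-factor, three-level screening like this is typically summarized by range analysis of the orthogonal-array results. The sketch below illustrates that procedure on a standard L9(3^3) layout; the response values are made up for illustration and are not the paper's measurements:

```python
import numpy as np

# Hypothetical L9(3^3) design for the three electrodeposition factors,
# each at three levels (encoded 0, 1, 2). Responses are illustrative
# gas volumes, NOT data from the paper.
design = np.array([
    [0, 0, 0], [0, 1, 1], [0, 2, 2],
    [1, 0, 1], [1, 1, 2], [1, 2, 0],
    [2, 0, 2], [2, 1, 0], [2, 2, 1],
])
response = np.array([5.2, 6.1, 5.8, 7.0, 7.9, 6.5, 6.8, 8.1, 7.4])

# Range analysis: mean response at each level of each factor; the
# factor with the largest range R has the strongest effect, and the
# level with the highest mean is taken as optimal.
for f, name in enumerate(["NiSO4 conc.", "current density", "plating time"]):
    means = [response[design[:, f] == lv].mean() for lv in range(3)]
    r = max(means) - min(means)
    print(name, [round(m, 2) for m in means], "R =", round(r, 2))
```

Because each level of each factor appears in exactly three runs, the per-level means are directly comparable without refitting anything.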


Phospholipids are among the major bioactive ingredients of the Antarctic krill Euphausia superba. A feasible and effective extraction method for Antarctic krill oil was investigated and optimized by orthogonal test: the ratio of solid to liquid was 1:2.5, the extraction time was 5 min, and the ratio of ethyl acetate (EA) to n-butanol (BuOH) was 1:1. With this method, the extracted krill oil has a higher phospholipid content of 27.7% - 42.3%, together with total oil yields of 4.15% - 6.18%.

from Radix glycyrrhizae and Angelica dahurica (Fisch.) Benth. et Hook according to similarity-intermiscibility theory. In order to estimate and optimize the factors affecting extraction to achieve maximum recovery, we investigated the effect of water-ethanol extraction on the multiple-guideline grading of the dried extract quantity and the contents of liquiritin and imperatorin by single-factor experiment. A second-order polynomial model was set up to predict the multiple-guideline grading obtained using orthogonal test. The interaction among solid-liquid ratio, ethanol concentration, extraction time and number of extractions was investigated by analysis of variance. Finally, the optimal extraction condition was obtained, providing a basis for new drug development of the Fengshiding dropping pill.

Taking the faceplate structure system as the research object, this paper proposes an integrated optimization design method combining the orthogonal test method, the finite element method, neural networks and a genetic algorithm. The harmonic response analysis function of finite element analysis software is used to identify the natural frequency with the greatest influence on faceplate dynamic performance. The orthogonal test method combined with a neural network yields uniformly dispersed, mutually comparable sample points from a small number of samples; a genetic algorithm then optimizes the neural network model to obtain a globally optimal solution in a relatively short time. Examples show that this optimization method allows the individual algorithms to compensate for each other's shortcomings, enhances their adaptability, broadens their ranges of application, and is versatile enough to optimize complex structures.

Abstract: Canned taro (Colocasia esculenta) products on the market are large-capacity tinplate cans, which are difficult to carry. In order to develop a convenient, ready-to-eat soft canned taro product, the optimum formula of soft canned taro was studied in this experiment by single-factor and orthogonal tests, with sensory score and soluble solids content as indicators. The same formulation technology was used to make soft canned taro and glass canned taro, and the two cans were compared with commercially available tinplate canned taro products. The results showed that the optimal formulation of soft canned taro was as follows: 0.6% honey, a solid-liquid ratio of 2.00:1, 24% sugar and 0.15% salt. The sensory evaluation of the three kinds of canned taro was carried out by the method of fuzzy comprehensive evaluation. According to the comprehensive sensory evaluation, physical and chemical indexes and microbial indexes, the soft canned taro had the best sensory evaluation; most of its other physical and chemical indexes were better than those of the glass and tinplate cans, and it reached the commercial sterilization standard. This research on the processing conditions of soft canned taro provides a theoretical basis for the development of related taro products.

were studied and optimized. Water-soluble polysaccharide was obtained by the method of hot-water reflux extraction. Liquid-solid ratio, temperature, time and pH were each studied in the experiment. Crude polysaccharide was purified by the Sevage method. The mass fraction of the polysaccharide was determined by the phenol-sulfuric acid method, and the extraction conditions were optimized by orthogonal test.

Designers of program-centric persistence technologies are less constrained in their choice of storage format since they may legitimately assume that the persistent data will be solely accessed via the language infrastructure. The systems that adhere to the principles of orthogonal persistence have all used proprietary closed storage formats. There is no obvious technical reason why this is a necessary choice, although it may well maximise scope for achieving good performance. This may have been one factor behind the lack of commercial adoption of the various successful research prototypes. To invest in significant use of any closed storage system requires a very high level of trust in the long-term viability of the technology and the processes that support it. Other obvious limiting factors are the relatively limited scalability of those systems in terms of size and query performance, inevitable given the resources available.


Abstract— The paper presents an efficient construction algorithm for obtaining sparse kernel density estimates based on a regression approach that directly optimizes model generalization capability. Computational efficiency of the density construction is ensured using an orthogonal forward regression, and the algorithm incrementally minimizes the leave-one-out test score. A local regularization method is incorporated naturally into the density construction process to further enforce sparsity. An additional advantage of the proposed algorithm is that it is fully automatic and the user is not required to specify any criterion to terminate the density construction procedure. This is in contrast to an existing state-of-the-art kernel density estimation method using the support vector machine (SVM), where the user is required to specify some critical algorithm parameter. Several examples are included to demonstrate the ability of the proposed algorithm to effectively construct a very sparse kernel density estimate with accuracy comparable to that of the full-sample optimized Parzen window density estimate. Our experimental results also demonstrate that the proposed algorithm compares favourably with the SVM method, in terms of both test accuracy and sparsity, for constructing kernel density estimates.
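As a point of reference for the comparison above, the full-sample Parzen window estimate that sparse methods are benchmarked against is simply an average of kernels centred on every data point. A minimal sketch with a Gaussian kernel (the width h is fixed by hand here, not optimized as in the paper's baseline):

```python
import numpy as np

def parzen_window(x_train, x_eval, h):
    """Full-sample Parzen window density estimate with a Gaussian
    kernel of width h: the average of one kernel per training point."""
    diffs = x_eval[:, None] - x_train[None, :]
    k = np.exp(-0.5 * (diffs / h) ** 2) / (h * np.sqrt(2.0 * np.pi))
    return k.mean(axis=1)

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=500)      # training sample from N(0, 1)
grid = np.linspace(-3.0, 3.0, 61)
p_hat = parzen_window(x, grid, h=0.3)

# The estimate should integrate to roughly 1 over the grid and peak
# near the true mode at 0.
mass = float(np.sum(p_hat) * (grid[1] - grid[0]))
print(round(mass, 1))
```

The cost of evaluating this estimate grows with the full sample size, which is exactly what a sparse kernel density estimate avoids.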


For all experiments, ants were trained to visit a feeder filled with biscuit crumbs in a channel system covered with a polarization filter transparency (POL filter). In 2008, ants were individually marked and had to visit the feeder at least five times before being tested. In 2010, ants were not individually marked because the training channel was so crowded; therefore, we could not record the exact number of visits to the feeder before testing. However, as the ants continuously shuttled between the feeder and the nest at high speed, most of the tested individuals had likely visited the feeder several times. Nor can we exclude the possibility that some individuals saw a different e-vector orientation on days before the actual training. However, earlier tests showed that ants use the actual outbound path to determine their home vector direction (e.g. Wehner et al., 2002). In addition, the results of 2010 were very similar to those of 2008, when the ants were individually marked and their experimental experience was recorded (data not shown). Individual ants visiting the feeder were caught and transported – without sight of the sky or surroundings – to a distant test field (distance to the nest >150 m). The test field was a flat area devoid of any vegetation or other landmarks, with a grid painted on the desert floor (grid width 1 m, 15 × 15 m), where the ant was released with a morsel of biscuit and where her homing direction could be recorded.


Samples were submerged in a PBS bath within a custom fixture, and the static and dynamic mechanical properties were measured in unconfined compression [37] between two rigid, impermeable platens using an axial testing system (Bose EnduraTEC ELF3220, EnduraTEC Systems Corp., Minnetonka, MN) equipped with a 500-g load cell (Model 31 Miniature Load Cell, Sensotec, Columbus, OH). Several compression tests were performed in succession, as follows. A compressive tare load of 0.025 N was applied in load control for 5 min, and then a new sample thickness was calculated based on the initial thickness and the change in actuator displacement under the creep tare load. A stress relaxation test was performed to 10% strain (based on the post-creep thickness), applied at a rate of 0.01 mm/s and held for 40 min to ensure the sample reached equilibrium. Sinusoidal cyclic loading was then applied using a magnitude of 10 ± 1% strain (9–11% strain) for 10 cycles each at frequencies of 0.1, 1, and 10 Hz. A second stress relaxation test was performed to 20% strain, applied at a rate of 0.01 mm/s and held for 40 min, followed by a second set of sinusoidal cyclic loading to 20 ± 1% strain (19–21% strain) for 10 cycles each at


Standard orthogonal projection images of a rectangular grid test phantom housed in a stereotactic frame (Olivier Bertrand Tipal frame, Tipal Instruments, Montreal) containing fiducial markers (Fig 1) were taken using digital imaging equipment commonly used for angiography (Siemens Polytron). The Olivier Bertrand Tipal frame has been described in detail elsewhere (5, 6) and was confirmed to be accurate within 1 mm at our institution. The test phantom consisted of plexiglass grid plates with holes in a 2-D matrix at 1-cm spacings. A standard neuroradiographic technique (80 kilovolts [peak], four frames per second) with orthogonal (anteroposterior and lateral) views was used in imaging the phantom, with the central ray directed along the x-axis of the frame (Fig 2). The location of a single grid plate was altered along the x-axis by up to 7 cm both anterior and posterior to the central plane. Source-to-frame and source-to-image plane distances were held constant at 68 and 104 cm, respectively, typical of actual usage in a neuroangiographic suite.


In order to perform a more systematic evaluation of the use of matrix factorisation for aligning words, we tested this technique on the full trial and test data from the 2003 HLT-NAACL Workshop. Note that the reference data has both "Sure" and "Probable" alignments, with about 77% of all alignments in the latter category. On the other hand, our system proposes only one type of alignment. The evaluation is done using the performance measures described in (Mihalcea and Pedersen, 2003): precision, recall and F-score on the probable and sure alignments, as well as the Alignment Error Rate (AER), which in our case is a weighted average of the recall on the sure alignments and the precision on the probable ones. Given an alignment A and gold standards G_S and
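Under the usual definitions from that shared task (precision against the Probable set, recall against the Sure set, and AER combining the two), the scores can be computed directly from the link sets. A small self-contained sketch; the link tuples are toy data, not workshop alignments:

```python
def alignment_scores(A, S, P):
    """Precision, recall and Alignment Error Rate (AER) as defined in
    Mihalcea and Pedersen (2003). A: proposed alignment links,
    S: 'Sure' gold links, P: 'Probable' gold links (S is a subset of P)."""
    A, S, P = set(A), set(S), set(P)
    precision = len(A & P) / len(A)    # proposed links confirmed as probable
    recall = len(A & S) / len(S)       # sure links recovered
    aer = 1.0 - (len(A & S) + len(A & P)) / (len(A) + len(S))
    return precision, recall, aer

# Toy example: word-pair links as (source_pos, target_pos) tuples.
A = [(0, 0), (1, 1), (2, 3)]
S = [(0, 0), (1, 1)]
P = [(0, 0), (1, 1), (2, 2), (2, 3)]
p, r, aer = alignment_scores(A, S, P)
print(p, r, round(aer, 2))  # 1.0 1.0 0.0
```

A perfect score here (AER = 0) reflects that every proposed link is at least Probable and every Sure link is recovered, matching the weighted-average interpretation in the text.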

to the kernel density construction process. The computational efficiency of using the delete-one cross validation is ensured by using the orthogonal least squares algorithm [21], [22], as first shown in [20], and multiple-regularizer or local regularization is known to be capable of providing very sparse solutions [8], [14]–[16]. Our previous work on sparse regression modeling [16] has shown that the OFR based on the LOO test score and local regularization offers considerable advantages in realizing these two critical objectives of sparse modeling over several other state-of-the-art methods. The current investigation shows that the proposed SDC method inherits these crucial advantages. Compared with the SVM method, our SDC algorithm is simpler to implement and has no critical algorithm parameter that needs to be specified by the user. Several examples are used to illustrate the ability of this new SDC algorithm to efficiently construct a sparse density estimate with accuracy comparable to that of the Parzen window estimate. Some examples that have been used in the existing literature to investigate the SVM method are specifically chosen in order to compare the performance of our SDC algorithm with the SVM density estimation method. Our experimental results demonstrate that the SDC algorithm offers a viable alternative to the SVM method for constructing sparse and accurate kernel density estimates.


This paper introduces an automatic robust nonlinear identification algorithm using the leave-one-out test score, also known as the PRESS (Predicted REsidual Sums of Squares) statistic, and regularised orthogonal least squares. The proposed algorithm aims to achieve maximised model robustness via two effective and complementary approaches: parameter regularisation via ridge regression and selection of a model structure with optimal generalisation. The major contributions are to derive the PRESS error in a regularised orthogonal weight model, to develop an efficient recursive computation formula for PRESS errors in the regularised orthogonal least squares forward regression framework, and hence to construct a model with a good generalisation property. Based on the properties of the PRESS statistic, the proposed algorithm achieves a fully automated model construction procedure without resort to any other validation data set for model evaluation.
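The key fact exploited by such PRESS-based algorithms is that leave-one-out residuals of a regularised linear-in-the-parameters model can be obtained without n refits, via the diagonal of the hat matrix. The sketch below demonstrates the identity for plain ridge regression (not the paper's orthogonal forward regression framework, but the same underlying shortcut):

```python
import numpy as np

def press_errors(X, y, lam=1e-3):
    """Leave-one-out (PRESS) residuals for ridge-regularised least
    squares, computed without refitting: e_loo_i = e_i / (1 - h_ii),
    where H = X (X^T X + lam I)^{-1} X^T is the hat matrix."""
    n, m = X.shape
    H = X @ np.linalg.solve(X.T @ X + lam * np.eye(m), X.T)
    e = y - H @ y                      # ordinary (training) residuals
    return e / (1.0 - np.diag(H))     # exact LOO residuals

# Sanity check against brute-force leave-one-out refitting.
rng = np.random.default_rng(1)
X = rng.normal(size=(30, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=30)
fast = press_errors(X, y)

brute = np.empty(30)
for i in range(30):
    mask = np.arange(30) != i
    Xi, yi = X[mask], y[mask]
    w = np.linalg.solve(Xi.T @ Xi + 1e-3 * np.eye(3), Xi.T @ yi)
    brute[i] = y[i] - X[i] @ w
print(np.allclose(fast, brute))  # True
```

The paper's recursive formulation pushes this further, updating the PRESS errors cheaply as each regressor is added during forward regression.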


This paper took gluten, wheat flour and wheat bran as raw materials and, through steaming, koji making, preparation of sauce-fermented grains, fermentation and so on, preliminarily studied a low-salt, solid-state fermentation process for producing soy sauce. The strain As3.951 (Aspergillus oryzae), HETIANKUAN QuJing and DINGXIN QuJing were used to make koji. After testing the activity of the different koji at different fermentation times, the neutral protease activity and the number of spores were found to be highest in the culture made from DINGXIN QuJing. Though the neutral protease activity decreased when wheat gluten replaced soybean meal as the main nitrogen source, it was still adequate for production. A four-factor orthogonal experiment on gluten, wheat flour, bran and water showed the best proportion to be 3:4:3:6; the most influential factor was gluten, followed by water. Finally, the study of the gluten soy sauce fermentation technology showed that the amino nitrogen content and the generation rate of amino nitrogen were highest when the pH was 7, the brine concentration was 12% and the solid-liquid ratio was 1:0.6 during fermentation.

After the product/feature reaches the market, field errors, customer feedback, product/feature enhancements, etc. are generally handled through either the DR (defect report) process or the RFC (request for change) process. This results in the addition of test cases to test the new code. Very often the test cases newly generated to verify code changes in the feature/product are added directly to existing optimized test suites, without any optimization. This practice may at times defeat the purpose of the optimized-test concept using OA, since it is time-consuming to identify parameters and levels for small fixes, use tools like Minitab to define the optimization, and so on. In such cases it is advisable to revisit the test cases at fixed intervals, based on the field error rate or the rate of feature enhancement, to bring the newly added tests under the optimization. Though this looks like additional effort, in practice it is very small compared to the effort and time saved by the optimization.

To test the implementation of the non-orthogonal differencing schemes, the code is used to solve the problem of two-dimensional lid-driven cavity flow with an inclined side wall, provided as a benchmark test case by Demirdzic et al. [7]. Results are shown for Reynolds numbers 100 and 1000 with wall angles β = 45° and β = 30°, using the deferred QUICK scheme of Hayase et al. [8]. The solution field is calculated using a uniform mesh of 81 × 81 for Re = 100 and 101 × 101 for Re = 1000. The pressure under-relaxation factor α_p is taken as 0.05 for Re = 100 and 0.01 for Re = 1000. The pseudo time step ∆τ is used as 0.01 for Re = 100

Adaptive detection of signals embedded in Gaussian or non-Gaussian disturbance with unknown covariance matrix has been an active research field in the last few decades. Several generalized likelihood ratio test (GLRT) based methods have been proposed, which utilize secondary (training) data, that is, data vectors sharing the same spectral properties, to form an estimate of the disturbance covariance. In particular, Kelly [1] derives a constant false alarm rate (CFAR) test for detecting target signals known up to a scaling factor; Robey et al. [2] develop a two-step GLRT design procedure, called the adaptive matched filter (AMF). Based on the above methods, some improved approaches have been proposed, for example, the non-Gaussian version of Robey's adaptive strategy in [3–6] and the extended-target version of Kelly's adaptive detection strategy in [7]. In addition, considering the presence of mutual coupling and near-field effects, De Maio et al. [8] redesign Kelly's GLRT detector and the AMF. Most of the above methods work well, provided that the exact knowledge of the signal array response vector
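For concreteness, the AMF of Robey et al. [2] compares the statistic t = |s^H R^-1 x|^2 / (s^H R^-1 s) against a threshold, where R is the sample covariance estimated from the secondary data. A minimal sketch with synthetic complex Gaussian data; the dimensions, steering vector and target amplitude are illustrative choices, not values from the cited works:

```python
import numpy as np

def amf_statistic(x, s, secondary):
    """AMF test statistic: t = |s^H R^-1 x|^2 / (s^H R^-1 s), with R
    the sample covariance of K target-free secondary data vectors
    (columns of `secondary`)."""
    K = secondary.shape[1]
    R = (secondary @ secondary.conj().T) / K        # sample covariance
    Ri_x = np.linalg.solve(R, x)
    Ri_s = np.linalg.solve(R, s)
    return np.abs(s.conj().T @ Ri_x) ** 2 / np.real(s.conj().T @ Ri_s)

rng = np.random.default_rng(2)
N, K = 4, 32                                        # array size, training size
s = np.ones(N, dtype=complex) / np.sqrt(N)          # assumed steering vector
sec = (rng.normal(size=(N, K)) + 1j * rng.normal(size=(N, K))) / np.sqrt(2)
noise = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)

# The statistic should be much larger when the steering vector is
# actually present in the test cell than under noise alone.
t_h0 = amf_statistic(noise, s, sec)
t_h1 = amf_statistic(noise + 20.0 * s, s, sec)
print(t_h1 > t_h0)
```

Setting the threshold on t from the secondary-data statistics alone is what gives the AMF its (asymptotically) constant false alarm rate.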
