The flow diagram for the R code is shown in Figure 9. The base design is read in as a CSV file. The assumed form of the model is then specified in the code. Once the number of runs to be augmented is determined, all possible combinations of externally generated runs are read into the code from a second CSV file. Because the coding of categorical variables differs across software packages, both the base design and the augmented runs were modified to reflect the model-matrix values rather than the raw factor levels, and these modified runs were used as inputs to the R code. Every possible combination of candidate runs is augmented to the base design, and the resulting power is calculated for all terms in the model. These power values are saved in a matrix, which is then used to pinpoint the exact runs that give the maximum power with respect to one or more factors in the model. Because all possible combinations are tested, this method finds the best possible design augmentation for maximizing power with respect to the factor of interest. The user can specify any initial design, balanced or not, and can observe the effect of different initial designs on the final output.
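The exhaustive search described above can be sketched as follows. This is a Python analogue (the original is R code), with a made-up base design: since the power for a model term grows as the variance of its coefficient estimate shrinks, the diagonal of (X'X)^-1 is used here as a proxy for per-term power rather than the full noncentral-F calculation.

```python
# Hypothetical sketch: pick the pair of candidate runs whose augmentation
# minimizes the variance of the factor-A coefficient (maximizing its power).
import itertools
import numpy as np

# base design: 2^2 full factorial, model matrix columns = intercept, A, B
base = np.array([[1, -1, -1], [1, -1, 1], [1, 1, -1], [1, 1, 1]], float)
# all possible candidate runs (model-matrix form, as in the text)
candidates = [np.array([1, a, b], float) for a in (-1, 1) for b in (-1, 1)]

def coef_variances(X):
    # diagonal of (X'X)^-1: proportional to Var(beta_hat) for each term
    return np.diag(np.linalg.inv(X.T @ X))

best = None
for pair in itertools.combinations(range(len(candidates)), 2):  # augment 2 runs
    X = np.vstack([base] + [candidates[i] for i in pair])
    var_A = coef_variances(X)[1]  # variance proxy for the factor-A term
    if best is None or var_A < best[1]:
        best = (pair, var_A)

print("best augmentation:", best[0], "var(A) proxy:", round(best[1], 4))
```

In the full version, this inner loop would compute power for every model term and store the results in a matrix, exactly as the text describes.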


Over the past few decades the medical research disciplines, especially the area of clinical trials, have widely emphasised the importance of rigorous experimental design, sound statistical analysis, and the correct use of statistics in peer-reviewed publications [2-6]. Although the general understanding of basic statistical methods (e.g. t-test, ANOVA) has improved in these disciplines, errors in their application and reporting can still be found. For instance, the t-test and ANOVA are fairly robust to moderate departures from their underlying assumptions of normally distributed data and equality of variance (homogeneity), except in the presence of very small or unequal sample sizes, which can considerably decrease the statistical power of the analyses [7-10]. In order to promote more rigorous application and reporting of data analyses in the area of clinical trials, the Consolidated Standards of Reporting Trials (CONSORT) have been adopted. CONSORT has significantly assisted researchers in improving the design, analysis and reporting of clinical trials [11]. This is an example of how a community-driven effort can help to improve the reporting of scientific information. Moreover, this instrument has been shown to help authors, reviewers, editors and publishers improve readers' confidence in the scientific quality, relevance and validity of the studies published. We and others argue [12,13] that there is still a need for more rigorous approaches to reporting information relevant to gene expression data analysis. Therefore, it is important to take a closer look at the level achieved by recently published papers with respect to the fundamental factors for correctly justifying, describing and interpreting data analysis techniques and results.

participants were required to yield 80% power in a single voxel for typical block-design activation levels. However, in fMRI the multiple-comparisons problem, and the associated potential for high levels of false positives, forces stricter thresholds, at which they demonstrated that twice the number of participants would be needed to maintain the same level of statistical power. This recommended number of participants is higher than in the vast majority of fMRI studies but is similar to independent assessments based on empirical data from a visual/audio/motor task (18) and from an event-related cognitive task (19). The Murphy and Garavan study (19) found that statistical power is surprisingly low at typical sample sizes (n<20), but that voxels that were significantly active at these smaller sample sizes tended to be true positives. Although voxelwise overlap may be poor in tests of reproducibility, the locations of activated areas provide some optimism for studies with typical sample sizes. It was found that the similarity between centres-of-mass for activated regions does not increase after more than 20 participants are included in the statistics. The conclusion can be drawn from this paper that a study with fewer participants than Desmond and Glover propose is not necessarily inaccurate, but it is incomplete: activated areas are likely to be true positives, but there will be a sizable number of false negatives. Needless to say, the required number of participants is influenced by the effect size which, in turn, is affected by the sensitivity of the
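The effect of stricter voxelwise thresholds on required sample size can be illustrated with a simple calculation. This sketch uses the normal approximation to a one-sample test; the effect size (d = 0.5) and the two thresholds are illustrative assumptions, not values taken from the studies cited above.

```python
# Illustrative power calculation: stricter thresholds (smaller alpha)
# require substantially more participants for the same power.
import math

def norm_cdf(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def z_from_alpha(alpha):
    # upper-tail critical value, found by bisection on the normal CDF
    lo, hi = 0.0, 10.0
    while hi - lo > 1e-9:
        mid = (lo + hi) / 2
        if 1.0 - norm_cdf(mid) > alpha:
            lo = mid
        else:
            hi = mid
    return lo

def power(n, d, alpha):
    # power of a one-sided one-sample test of standardized effect size d
    return 1.0 - norm_cdf(z_from_alpha(alpha) - d * math.sqrt(n))

for alpha in (0.05, 0.001):
    n = next(n for n in range(2, 500) if power(n, 0.5, alpha) >= 0.8)
    print(f"alpha={alpha}: smallest n for 80% power = {n}")
```

Moving from a liberal to a corrected threshold here more than doubles the sample size needed, consistent with the qualitative point made in the text.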


DOE modeling is increasingly being applied in industry and science as a powerful and effective technique to identify influential variables efficiently and to find the correct settings to optimize the response. Where the application in an industrial environment is targeted at identifying the optimal parameter settings of production equipment while minimizing the number of trials for cost reasons, an indiscriminate and lazy treatment of the statistical boundary conditions may be excusable. In an academic environment it is not. In scientific research, the statistical relevance of a factor's response has to comply rigorously with the statistical requirements of minimizing the Alpha- and Beta-risk. Students have to put serious time and effort into the planning phase of the DOE to limit noise during experimentation and to estimate up front the potential power of their DOE, so as not to invalidate the experimental results with too low a power. The easy-to-understand 14-step approach presented here, with its explicit focus on Beta-risk, is ideally suited for scientific investigation, guiding statistically inexperienced students and researchers with a consistent, fail-proof approach to obtaining statistically significant research results.
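The planning step described above starts from the run budget. A minimal sketch of enumerating a two-level full-factorial design, so the experimenter can see up front how many runs each replicate costs (the factor names are illustrative, not from the text):

```python
# Enumerate a 2^k full-factorial design in coded units (-1 = low, +1 = high).
from itertools import product

factors = ["temperature", "pressure", "catalyst"]   # illustrative factors
design = list(product((-1, +1), repeat=len(factors)))

for run, levels in enumerate(design, start=1):
    print(run, dict(zip(factors, levels)))
print("runs per replicate:", len(design))           # 2^3 = 8
```

Comparing this run count against the sample size demanded by the target Beta-risk is exactly the up-front check the 14-step approach asks students to perform.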

“Heat transfer augmentation” means an increase in a heat exchanger’s performance with the help of augmentation techniques; this can lead to a more economical heat exchanger design and to energy, material and cost savings in the heat exchange process. The subject of heat transfer enhancement is of serious interest in the design of effective and economical heat exchangers. Bergles et al. identified about 14 augmentation techniques used for heat exchangers. These techniques can be classified into passive, active and compound techniques. Passive techniques do not require any external power for the heat transfer augmentation; examples are surface coatings, rough surfaces, extended surfaces, displaced inserts, swirl flow devices, surface flow devices, surface tension devices, additives for liquids and additives for gases. Active techniques need some external power, such as electric or acoustic fields, surface vibration, mechanical aids, fluid vibration, injection, suction and jet impingement. Compound techniques are combinations of these two classes.

The above-mentioned theoretical concepts of the rate of reaction and the modeling of CSTR and PFR reactors give us the basis for the following considerations: the analysis of data from the experimental runs carried out with the nonthermal plasma reactors gave information about the effectiveness of the reactors in converting heavy oil into light hydrocarbons. The calculated variable was the efficiency (E), expressed in microliter/Joule and defined as the ratio between the total flow rate of carbon compounds exiting the reactors and the corresponding input power. Efficiency was thus considered the rate of product formation, expressed in microliter/Joule, equal to microliter/(W·s), and transformed into µmol/(W·s) by treating the gaseous products as ideal gases. Efficiency then gives the µmoles of gaseous products per unit time and per unit power. This is the main analogy that treats the efficiency, as the rate at which gaseous products form in the nonthermal plasma reactors, as equivalent to the rate of the chemical reaction, defined as the number of moles of substance converted or produced per unit time and per unit volume (mol/(L·min)) [7]-[10]. Table 1 shows the experimental results obtained for all the reactors assayed here.
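The unit conversion described above can be made concrete. This sketch converts an efficiency from microliter/Joule to µmol/(W·s) under the ideal-gas assumption; the molar volume assumes 25 °C and 1 atm, and the efficiency value itself is made up for illustration (the actual measured values are in Table 1).

```python
# Convert efficiency from uL/J to umol/(W*s), treating products as ideal gases.
MOLAR_VOLUME_L = 24.465  # L/mol for an ideal gas at 25 degC, 1 atm (assumed)

def ul_per_joule_to_umol_per_ws(e_ul_per_j):
    # 1 uL of gas = 1e-6 L = (1e-6 / 24.465) mol = (1 / 24.465) umol,
    # and 1 J = 1 W*s, so the time/power denominator is unchanged.
    return e_ul_per_j / MOLAR_VOLUME_L

e = 12.0  # hypothetical efficiency in uL/J
print(ul_per_joule_to_umol_per_ws(e), "umol/(W*s)")
```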


lower and upper survival functions for the next unit at the normal stress level. In Section 4 this approach is extended by including imprecision in the link parameter, which leads observations at levels other than the normal stress to be transformed into interval-valued observations at the normal stress level. The width of these intervals increases as a function of the difference between the corresponding stress level and the normal stress level. These interval-valued observations are then used in the NPI approach to yield new lower and upper survival functions with increased imprecision. This can be interpreted as a straightforward method to provide robust predictive inferences based on ALT data. This is the first investigation towards developing NPI methods for ALT data; the general idea of using imprecision as a safeguard against lack of detailed knowledge in ALT settings seems attractive. In Section 5 we present the results of an initial simulation study, which is the first step towards investigating and further developing our approach. In these simulations we investigate the method's performance for the case where data are actually simulated from the assumed power-Weibull model, so only parameter estimation and the connection with NPI for prediction at the standard stress level are investigated. In Section 6 we briefly discuss the use of imprecision in our method to provide robustness with regard to possible model misspecification. Section 7 provides brief concluding remarks about the proposed method and the future work planned in this research project.
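The transformation described above can be sketched as follows. A common link in power-Weibull ALT models maps a failure time at stress s back to the normal stress s0 via t0 = t·(s/s0)^γ; using an interval [γ_lo, γ_hi] for the imprecise link parameter turns each observation into an interval at the normal stress level. All numerical values below are illustrative, not from the paper.

```python
# Map an accelerated-stress observation to an interval at the normal stress
# level, given an imprecise power-law link parameter gamma in [g_lo, g_hi].
def to_normal_stress(t, s, s0, gamma_lo, gamma_hi):
    lo, hi = sorted(t * (s / s0) ** g for g in (gamma_lo, gamma_hi))
    return lo, hi

# a failure at 100 h observed at stress 2*s0, with gamma in [1.5, 2.5]
lo, hi = to_normal_stress(100.0, 2.0, 1.0, 1.5, 2.5)
print(f"interval at normal stress: [{lo:.1f}, {hi:.1f}] h")
```

Note that for s = s0 the interval collapses to a point, and its width grows with the stress ratio s/s0, matching the behaviour described in the text.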


the wildlife habitat. This ecological impact may exceed the value of the generated electricity, especially in small river streams. In order to harness power from small river streams in Kenya, a new approach has to be examined. One possible solution is to use low-head micro hydro installations, such as the Gorlov helical turbine [9]. In this research we developed a low-head helical hydrokinetic water turbine coupled to a generator for power generation, targeting rural areas with small river streams in Kenya. The turbine can also be utilized in urban areas, especially in large sewer water pipelines, for power generation. The turbine uses the water currents of naturally flowing rivers for power generation [7]. Since water power is more predictable than wind or solar and can be gated and stored for later use, the hydrokinetic helical water turbine is believed to be the best method of extracting renewable energy compared to wind and solar. In this plug-flow design, the turbine was plugged into a stable metal frame structure and locked; once the gate is opened to some height, the turbine starts to rotate until it attains the nominal speed at which power generation is realized. The orientation of helical hydrokinetic turbines can be either horizontal or vertical. In this design we chose the vertical orientation because of its ability to admit flow from any direction, and because the costs related to generator installation and power transmission are greatly reduced in this design.
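A back-of-envelope estimate for a hydrokinetic turbine like the one described uses the standard kinetic power relation P = ½·Cp·ρ·A·v³. The power coefficient, swept area and current speed below are illustrative assumptions, not design values from this research.

```python
# Hydrokinetic power estimate for a small-stream turbine (illustrative values).
RHO_WATER = 1000.0  # kg/m^3, fresh water

def hydrokinetic_power_w(cp, area_m2, speed_m_s):
    # cp: power coefficient, area: swept area, speed: current velocity
    return 0.5 * cp * RHO_WATER * area_m2 * speed_m_s ** 3

p = hydrokinetic_power_w(cp=0.35, area_m2=0.5, speed_m_s=1.5)
print(round(p, 1), "W")  # about 295 W from a small stream at 1.5 m/s
```

The cubic dependence on current speed is why site selection on a small stream matters far more than turbine size.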


The function of a comparator is to produce an output voltage that is high or low depending on whether the amplitude of the input is greater or smaller than a reference signal. It delivers a binary output whose value is based on a comparison of two analog inputs. Typical comparators have a differential architecture, and they can be further divided into open-loop and dynamic comparators. Open-loop comparators are essentially operational amplifiers [4]. Dynamic comparators use positive feedback, like flip-flops, to compare the magnitudes of the input and the external reference signal. However, these differential comparators are inherently complex in design and consume a large amount of power. Alternatively, a single-ended comparator architecture may be deployed instead of a complete analog comparator block. Threshold inverter quantization (TIQ) comparators have been used to design flash ADCs. The TIQ inverter-based comparator consists of two cascaded inverters, as shown in Figure 1. The inverter requires fewer transistors than a conventional comparator. In fact, a conventional comparator requires two input signals, whereas the inverter-based comparator requires only one input signal. The logic reference, or switching voltage, is generated by the inverter itself [5]. Graphically, the switching voltage can be identified at the intersection of the input voltage (Vin) and output voltage (Vout) curves. At this point, both the transistor
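The switching voltage that the TIQ comparator uses as its built-in reference follows from the standard CMOS inverter threshold relation V_M = (V_DD − |V_tp| + V_tn·√(kn/kp)) / (1 + √(kn/kp)). The device values below are illustrative assumptions, not parameters from the text.

```python
# Illustrative CMOS inverter switching-voltage calculation (the TIQ reference).
import math

def switching_voltage(vdd, vtn, vtp_abs, kn, kp):
    # vdd: supply; vtn, |vtp|: threshold voltages; kn, kp: device transconductances
    r = math.sqrt(kn / kp)
    return (vdd - vtp_abs + vtn * r) / (1 + r)

vm = switching_voltage(vdd=1.8, vtn=0.45, vtp_abs=0.45, kn=200e-6, kp=100e-6)
print(round(vm, 3), "V")
```

Sizing the transistor ratio kn/kp shifts V_M up or down, which is how a TIQ flash ADC sets a different comparison level in each inverter string without an external reference.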

the wall. In combination with the reciprocating flow, substantial axial heat transport results [6]. The left wall of the enclosure is modelled as a rigid boundary that vibrates harmonically in time, representing the motion of a loudspeaker diaphragm or the vibration of a commercial ultrasonic mixer probe. The vibrating boundary is the acoustic source in this geometry, and the sound field in the enclosure is created by this source. We are able to model the physical processes, including the compression of the fluid and the generation of the wave, acoustic boundary layer development, and finally the interaction of the wave field with viscous effects and the formation of streaming structures [7]. Acoustic streaming induced by sonic longitudinal vibration is investigated. Acoustic streaming induced by ultrasonic flexural travelling waves was studied for a micropump application, and the negligible heat transfer capability of acoustic streaming was reported by Nguyen and White [8]. Mozurkewich presented the results of an experimental investigation of heat transfer from a cylinder in an acoustic standing wave generated in a free stream. He established that, for a cylinder of fixed diameter and a fixed acoustic frequency, the Nusselt number showed a distinctive variation with acoustic amplitude. At high amplitude, the Nusselt number followed a steady-flow, forced-convection correlation (time-averaged over an acoustic cycle), while at low amplitude the Nusselt number had a constant value determined by natural convection [9]. The acoustic field in a fluid with attenuation, due to viscosity and thermal conduction, is always accompanied by a unidirectional flow called acoustic streaming. Bradley and Nyborg [10] considered a problem in which a steady-state sonic wave propagates in a longitudinal direction in a fluid enclosed between two

Regression analysis is a statistical procedure used to find the relationship among a set of variables. Regression finds the line that best fits the observations. It does this by finding the line that results in the lowest sum of squared errors. Since the line describes the mean effect of the independent variables, by definition the sum of the actual errors will be zero. If you add up all the values predicted by the model, the sum equals the sum of the observed values. That is, the sum of the negative errors (for points below the line) will exactly offset the sum of the positive errors (for points above the line). Summing just the errors therefore wouldn't be useful, because the sum is always zero. So, instead, regression uses the sum of the squares of the errors. Ordinary least squares regression finds the line that results in the lowest sum of squared errors.
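A minimal worked example of the above, using the closed-form least-squares slope and intercept; the data points are made up for illustration. Note how the residuals sum to (essentially) zero, as the text claims.

```python
# Simple ordinary least squares fit for y = a + b*x.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]   # illustrative observations

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
# slope = S_xy / S_xx, intercept chosen so the line passes through the means
b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
a = mean_y - b * mean_x

residuals = [y - (a + b * x) for x, y in zip(xs, ys)]
print("slope:", round(b, 3), "intercept:", round(a, 3))
print("sum of residuals:", round(sum(residuals), 10))  # ~0 by construction
```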

Powder X-ray diffraction analysis has been used to characterize the physical state of the drug in the polymeric matrices of solid dispersions. The diffraction studies of the drug, the polymer mixture and the optimized solid dispersion formulation were performed in a powder X-ray diffractometer with a vertical goniometer (PW 1050/37, Philips, Netherlands). PXRD patterns were recorded using monochromatic Cu Kα radiation with a Ni filter at a voltage of 40 kV and a current of 20 mA, over 2θ values from 5° to 80° [15].

An investigation into using an SSSC-based power oscillation damping controller to increase the transient stability of a multimachine power system is presented. Transient stability analysis has recently become a major concern in the operation of power systems because of the growing stress on power system networks. Apart from enabling better utilization of existing power system capacity, FACTS controllers can manipulate system parameters, such as the magnitudes of the sending-end and receiving-end voltages and the active and reactive power, to reinforce the transient stability of the system. This thesis work describes the static synchronous series compensator, a device that controls the power flow of the transmission line during severe disturbances. A fundamental advantage of the static synchronous series compensator is that it does not include bulky components such as reactors and inductors, so the device is less expensive than conventional equipment. The static synchronous series compensator injects or absorbs reactance in the system and thereby manipulates the power. Simulation results obtained for the chosen bus 2 in a two-machine power system show the efficacy of this compensator as one of the FACTS devices for controlling power flows, achieving the desired values of active and reactive power, and damping oscillations effectively.

In this project, several tools and pieces of equipment will be used to fulfill the stated objectives. Six steps must be followed to obtain a systematic and well-organized process for this project. The previous design must first be verified to ensure that the dimensions, shape and collected data are accurate and can be used to fabricate the mould. If not, additional elements must be added to meet the standard requirements for designing the mould and manufacturing it later. Verifying the data does not require highly accurate equipment such as a CMM (coordinate measuring machine), because the previous study already conducted that procedure. For the added elements, a simple tool such as a vernier caliper is used to verify the given dimensions. After verification, the design process is carried out: the existing design is altered to meet the standard requirements and the functional parts are included. The design process uses CAD (computer-aided design) software to develop a 3-D solid model of the core and cavity, with additional elements for a complete mould design such as the water connector, chamfers and a slight adjustment to the sprue. The software used in this research is SolidWorks, and a DOE (design of experiments) approach is carried out to select the proper parameters for fabricating the keris mould. The design is then analyzed to determine whether it can be used for further processes such as machining; the method used to analyze the design is MoldflowXpress.
After this analysis, the solid-model data are transferred to the analysis stage, where suitable software is used to investigate the effect of the design on the results obtained from the MoldflowXpress analysis. The software used is Minitab, which can analyze the significant effects of the selected parameters on the response involved (the injection time of the plastic keris mould).


– Jun Cai and Istvan Erlich proposed a novel approach which includes all possible active and reactive power controls, based on the multi-input multi-output (MIMO) transfer function and singular value decomposition (SVD) [14]. As seen in the approaches discussed above, the classical methods consider the active powers at all buses to be constant. Voltage stability controls such as reactive power compensation, under-voltage load shedding and transformer tap changers can also be taken into consideration using this method. These controls are selected as inputs to the MIMO system. The incremental changes in the bus voltage magnitudes are considered the output variables. The input singular vectors are used to select the most suitable control signal for improving steady-state voltage stability, and the output singular vectors provide an overview of the most critical buses affected by static voltage stability. Since the inputs and outputs of the real system can be restricted to a small range, this method can also be applied to large power system networks.
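The SVD step can be sketched on a toy sensitivity matrix. The matrix below, mapping three reactive-power controls to four bus-voltage changes, is made up for illustration; the interpretation of the singular vectors follows the text.

```python
# SVD of a hypothetical control-to-voltage sensitivity matrix dV = S @ du.
import numpy as np

S = np.array([[0.9, 0.1, 0.0],
              [0.7, 0.2, 0.1],
              [0.1, 0.8, 0.3],
              [0.0, 0.3, 0.6]])   # rows: 4 buses, columns: 3 controls

U, sigma, Vt = np.linalg.svd(S)
# dominant right singular vector -> the most effective control input
# dominant left singular vector  -> the buses that control affects most
print("most effective control:", int(np.argmax(np.abs(Vt[0]))))
print("most affected bus:", int(np.argmax(np.abs(U[:, 0]))))
```

Selecting controls by the largest entries of the dominant input singular vector, and ranking buses by the dominant output singular vector, is the essence of the approach summarized above.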


Available Online at www.ijpret.com
VOS), since we exploit error resiliency using a different approach. Since DSP blocks mainly consist of adders and multipliers (which are, in turn, built using adders), we propose logic complexity reduction at the transistor level. We apply this to addition at the bit level by simplifying the mirror adder (MA) circuit. We develop imprecise but simplified multipliers, which provide an extra layer of power savings over conventional low-power design techniques. Complexity reduction leads to power reduction in two different ways.
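The accuracy/complexity trade-off behind such bit-level simplification can be illustrated with a truth-table comparison. The approximation below (sum taken as the complement of the carry) is one known way to cut transistors from a full adder's sum logic; it is an assumed example, not the paper's exact simplified mirror adder.

```python
# Compare an exact 1-bit full adder against an approximate one at the bit level.
from itertools import product

def exact_fa(a, b, cin):
    s = a ^ b ^ cin
    cout = (a & b) | (a & cin) | (b & cin)
    return s, cout

def approx_fa(a, b, cin):
    cout = (a & b) | (a & cin) | (b & cin)   # carry logic kept exact
    s = cout ^ 1                              # sum approximated as NOT carry
    return s, cout

errors = sum(exact_fa(*bits) != approx_fa(*bits)
             for bits in product((0, 1), repeat=3))
print("mismatching input patterns:", errors, "of 8")
```

Only two of the eight input patterns (all-zeros and all-ones) produce a wrong sum, while the carry chain stays exact, which is why such simplifications can stay within acceptable error bounds for DSP workloads.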


In this paper, we present various designs of a three-bit comparator circuit using existing reversible logic gates. The paper proposes a new gate, called the reversible DG gate, which was used in the design of the comparator. All the comparators have been modeled and investigated using VHDL and Quartus II.
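The defining property of any reversible gate is that its input-to-output mapping is a bijection. The DG gate's truth table is not given here, so as a stand-in this sketch checks the standard 3×3 Toffoli gate, a common building block in reversible comparator designs.

```python
# Verify reversibility of a 3x3 gate: all 8 outputs must be distinct.
from itertools import product

def toffoli(a, b, c):
    # target bit c flips only when both control bits a and b are 1
    return a, b, c ^ (a & b)

outputs = [toffoli(*bits) for bits in product((0, 1), repeat=3)]
print("distinct outputs:", len(set(outputs)), "of 8")  # 8 -> reversible
```

The same check, applied to any proposed gate's truth table, confirms (or refutes) its reversibility before committing it to a VHDL model.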

An FPGA also has building blocks such as combinational logic, sequential logic, memory, registers and arithmetic units. This provides a short design time and low power consumption in standby mode, and its cost at the design level is low. The delay can also be reduced relative to an application-specific integrated circuit. The NATURE architecture is based on CMOS/nanotechnology. Designing with nanoRAM faces two main problems: logic density and run-time configuration. These can be overcome by changing to CMOS logic: the memory is based on a CMOS device, static random-access memory, which consumes low power for read and write operations compared with other logic devices. A pipelining concept is also used to improve the operation of the device.

In this thesis we will investigate the benefits of a stacked topology with circuits having both linear and non-linear characteristics. This will provide a good opportunity to compare whether we can exploit the power-efficiency enhancement of switch-mode power amplifiers for MMIC power amplifiers at high frequencies. In this way we will try to increase the output power to about twice that of a common-source stage while maintaining a relatively high PAE.
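The figure of merit mentioned above is power-added efficiency, PAE = (P_out − P_in) / P_DC. The numbers in this sketch are illustrative assumptions only, not measured results from the thesis.

```python
# Power-added efficiency comparison (illustrative values).
def pae(p_out_w, p_in_w, p_dc_w):
    # fraction of DC power converted to *added* RF power
    return (p_out_w - p_in_w) / p_dc_w

common_source = pae(p_out_w=0.5, p_in_w=0.05, p_dc_w=1.25)
stacked       = pae(p_out_w=1.0, p_in_w=0.05, p_dc_w=2.40)  # ~2x output power
print(f"common source PAE: {common_source:.1%}, stacked PAE: {stacked:.1%}")
```

The point of the stacked topology is visible here: if output power doubles without the DC budget doubling proportionally, PAE is preserved or improved.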


This study aimed to develop a genetic algorithm (ESDCC-GA) for the economic statistical design of X̄ control charts under uniform and non-uniform sampling intervals. The proposed algorithm is designed to solve the constrained problem, which involves the simultaneous use of continuous and discrete decision variables. To verify the performance of the proposed GA, the numerical example of Rahim and Banerjee (1993) with a Gamma failure mechanism is illustrated in this paper.

12 Read more