To analyze and evaluate the effects and optimal design of these instruments, economic innovation research has established a large number of empirical research methods. Along with the overall expansion and professionalization of experimental economics, behavioral evidence collected in laboratory experiments has become a vital complement to economic innovation research in recent years. Following Sørensen et al. (2010) and Chetty (2015), we suggest that lab experiments constitute a promising addition to the methodological toolkit of innovation research, advancing novel insights and providing predictions and policy implications by incorporating behavioral factors. We thus argue that laboratory experiments should be used if they yield additional evidence unattainable by other methods in a particular field of study. This resonates with the arguments of Falk and Heckman (2009), Chetty (2015), Madrian (2014), and Weimann (2015), who propose a pragmatic approach to the use of evidence derived from experimental methods, arguing that all empirical methods should be viewed as complementary (Falk and Heckman 2009). In this paper, we aim to contribute to the growing field of experimental innovation research, first by outlining the advantages and limitations of different methodological approaches in innovation research, and more specifically of laboratory experiments. Second, since previous papers have not attempted to summarize and structure the existing experimental literature, we provide a review of the experimental approaches to innovation policy, covering the most important studies from the four sub-fields in which lab experiments have been conducted to date. We conclude by advocating the further use of laboratory experiments in innovation research.
Abstract. The spatial assessment of short time-step precipitation is a challenging task. The low density of observation networks, as well as the bias in radar rainfall estimation, motivated the new idea of exploiting cars as moving rain gauges, with windshield wipers or optical sensors as measurement devices. In a preliminary study, this idea was tested with computer experiments (Haberlandt and Sester, 2010). The results showed that a high number of possibly inaccurate measurement devices (moving cars) provides more reliable areal rainfall estimates than a lower number of precise measurement devices (stationary gauges). Instead of assuming a relationship between wiper frequency (W) and rainfall intensity (R) with an arbitrary error, the main objective of this study is to derive valid W–R relationships between sensor readings and rainfall intensity by laboratory experiments. Sensor readings comprise the wiper speed as well as optical sensors, which can be mounted on cars and are usually intended for automating wiper activity. A rain simulator capable of producing a wide range of rainfall intensities was designed and constructed. The wiper speed and two optical sensors are used in the laboratory to measure rainfall intensities, which are compared with tipping-bucket readings as reference. Furthermore, the effect of car speed on the estimation of rainfall is investigated using a car-speed simulator device. The results show that sensor readings derived from manual wiper-speed adjustment according to front visibility can be considered a strong indicator of rainfall intensity, while automatic wiper adjustment shows weaker performance. The readings from the optical sensors also showed promising results toward measuring rainfall.
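A calibration of the kind described above can be sketched as a log-linear least-squares fit of a power-law W–R relationship. The data pairs and the power-law form below are illustrative assumptions for exposition, not the study's actual calibration data or model.

```python
import math

# Hypothetical calibration pairs from a rain-simulator run:
# wiper frequency W (wipes per minute) vs. reference rainfall
# intensity R (mm/h) from a tipping bucket. Values are illustrative.
samples = [(5, 2.1), (10, 5.0), (20, 12.5), (30, 22.0), (40, 33.0)]

# Fit a power-law W-R relationship R = a * W**b by ordinary
# least squares on log-transformed data: ln R = ln a + b ln W.
xs = [math.log(w) for w, _ in samples]
ys = [math.log(r) for _, r in samples]
n = len(samples)
mx, my = sum(xs) / n, sum(ys) / n
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
a = math.exp(my - b * mx)

def rainfall(w):
    """Estimate rainfall intensity (mm/h) from wiper frequency w."""
    return a * w ** b

print(round(rainfall(25), 1))  # estimate between the calibration points
```

With real sensor data, robustness to the arbitrary measurement error mentioned above would matter; a robust regression would then be the natural refinement of this sketch.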
The state of the granular streams at 0.9 m of vertical distance from the funnel aperture at each ambient pressure is summarized in Fig. 11. Because the transparent part of the vacuum chamber at Kobe University is 1.0 m high, the maximum drop height at which particles can be imaged from outside the chamber is approximately 0.9 m. In air, all particles except the 100 μm glass beads formed millimeter-size agglomerates at 0.9 m. Although the 100 μm glass beads did not form agglomerates at 0.9 m from the funnel aperture, agglomeration was initiated at approximately 1.2 m from the aperture. Based on Fig. 9a, b, the 50 μm glass beads can form agglomerates more easily than the 100 μm glass beads. The observations of the granular streams were thus in accordance with predictions based on the critical collisional velocity. Contrary to the prediction from the critical collision velocity, which strictly applies to spherical particles, alumina particles and silica sand grains were able to form agglomerates in air. A previous study showed that irregularly shaped particles have a higher probability of sticking and a higher capture velocity than spherical particles (Poppe et al. 2000); the reason was thought to be the multiple contact opportunities, and the resultant energy dissipation, allowed by the irregular shape. Royer et al. (2009) also conducted laboratory experiments with irregularly shaped particles but found less agglomeration of irregular particles, a tendency opposite to that observed in the current study. The irregularly shaped particles used in Royer et al. (2009) were quite rounded, that is, not angular, while those used in this study were elongated and angular. This difference in irregularity would result in the different degrees of agglomeration. In fact, the irregularity of the diamond particles of 1.3–1.9 μm in diameter (average diameter 1.5 μm) used by Poppe et al. (2000), which exhibited a higher sticking velocity than the spherical particles, is similar to that of the particles used in our study, although the collision velocity in the current study was lower than that of Poppe et al. (2000) and the energy dissipation mechanism may have been different. Further study is required to reveal the formation process of agglomerates in a granular stream.
In engineering education, practical work is an important complement to theoretical courses: students come to the laboratories to conduct experiments and appreciate the discrepancies between their observations and the predictions of the theoretical courses. However, due to several limiting factors in traditional laboratory experiments, students cannot gain the necessary experience. Remote laboratories for instrument control are an important tool for teaching students physics and control systems engineering, and for experimentation with a wide range of feedback devices.
This paper has presented results from a set of laboratory experiments. Field conditions are naturally different. Drainage of lower peat layers is probably more delayed than in the laboratory because the water cannot flow off freely. In saturated peat, with less lateral flow, the additional macropores created by droughts might not emerge, or may close more rapidly afterwards. Furthermore, rapid flow through macropores under laboratory conditions might enlarge or sustain preferential flow paths more than under field conditions. However, field evidence presented by Holden & Burt (2002) from the dry summer of 1999 in the North Pennines corroborates the findings presented above. Furthermore, pre-drought simulation in the laboratory produced runoff from peat blocks that was not significantly different from runoff under simulated rain in the field (Holden & Burt, 2002). Hence the laboratory blocks seem to replicate field conditions reasonably well. An increased occurrence of droughts in the future might therefore result in changes to hydrological processes in blanket peat catchments.
Abstract—Laboratory experiments for image processing courses are usually software implementations of processing algorithms, but students of image processing come from diverse backgrounds with widely differing software experience. To avoid learning overhead, the software system should be easy to learn and use, even for those with no exposure to mathematical programming languages or object-oriented programming. The class library for image processing (CLIP) supports users with knowledge of C by providing three C++ types with small public interfaces, including natural and efficient operator overloading. CLIP programs are compact and fast. Experience in using the system in undergraduate and graduate teaching indicates that it supports subject-matter learning with little distraction from language/system learning.
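CLIP itself is a C++ library; as a language-neutral illustration of the operator-overloading idea it describes, here is a minimal Python sketch. The `Image` class and its operators are hypothetical analogues, not part of CLIP's actual interface.

```python
class Image:
    """Minimal grayscale image with pixel-wise arithmetic operators,
    analogous in spirit to an operator-overloading image type
    (hypothetical illustration; not CLIP's actual API)."""

    def __init__(self, width, height, pixels=None):
        self.width, self.height = width, height
        self.pixels = pixels if pixels is not None else [0] * (width * height)

    def _combine(self, other, op):
        # Pixel-wise combination with another image or a scalar.
        if isinstance(other, Image):
            data = [op(p, q) for p, q in zip(self.pixels, other.pixels)]
        else:
            data = [op(p, other) for p in self.pixels]
        return Image(self.width, self.height, data)

    def __add__(self, other):
        return self._combine(other, lambda p, q: p + q)

    def __sub__(self, other):
        return self._combine(other, lambda p, q: p - q)

    def __mul__(self, other):  # scalar or pixel-wise scaling
        return self._combine(other, lambda p, q: p * q)

# Expressions then read like image algebra, e.g. a scaled difference image:
a = Image(2, 2, [10, 20, 30, 40])
b = Image(2, 2, [1, 2, 3, 4])
diff = (a - b) * 2
print(diff.pixels)  # [18, 36, 54, 72]
```

The pedagogical point carries over: once the operators are defined, processing code reads as arithmetic on images, so students with only C experience can focus on the algorithm rather than the host language.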
Overall, the students' assessment of this virtual lab design was positive, and they believed that running the experiments shed light on topics that had not surfaced in the lectures. Although the students were not tested immediately after the experiment, they were able to answer questions corresponding to the topics covered by the VSE during their final exam, showing that the experiment met its objective of instructing students on the effect of communication-layer protocols on the power consumption of networking nodes in a WSN. The collaboration aspects of these experiments made the overall experience more interesting and fun. All components were running throughout the entire virtual lab session, except the five grid nodes, which were utilized less than 10% of the total virtual lab time. Unfortunately, even though the components were running, the voice of the Greek instructor did not reach the students in Japan approximately 30% of the time for this particular experiment, probably because of bandwidth limitations (the students in AIT had no such communication problems). Therefore, an instructor wishing to enhance his/her course with such virtual laboratory experiments should consider that the more integrated
In this article, we provide insights into how well laboratory experiments in economics replicate. Our sample consists of all 18 between-subject laboratory experimental papers published in the American Economic Review and the Quarterly Journal of Economics in 2011–2014. The most important statistically significant finding, as emphasized by the authors of each paper, was chosen for replication (see the Supplementary Materials, Section 1, and Tables S1 and S2 for details). We use replication sample sizes with at least 90% power [mean M = 0.92, median (Mdn) = 0.91] to detect the original effect size at the 5% significance level. All of the replication and analysis plans were made publicly known on the project website (see the Supplementary Materials, Section 1, for details) and were also sent to the original authors for verification.
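The sample-size logic behind such power-based replications can be sketched with the standard normal-approximation formula for a two-sample comparison of means; the paper's exact power calculations may differ, and the effect size used below is purely illustrative.

```python
import math
from statistics import NormalDist

def replication_n(effect_size, power=0.90, alpha=0.05):
    """Per-group sample size for a two-sample comparison of means,
    normal approximation: n = 2 * (z_{alpha/2} + z_beta)^2 / d^2.
    This is the textbook formula, not necessarily the paper's procedure."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    n = 2 * (z_alpha + z_beta) ** 2 / effect_size ** 2
    return math.ceil(n)

# E.g. an original study reporting a standardized effect of d = 0.5
# would need about 85 subjects per group for 90% power at the 5% level.
print(replication_n(0.5))  # 85
```

Larger original effects shrink the required replication sample quadratically, which is why replications of small reported effects are so much more expensive to power adequately.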
Fig. 10 above shows histograms of the relative frequencies of the logarithm of the amplitude of events in the AE simulations, a quantity strongly analogous to magnitude as a measure of the size of seismic events, for the compression test (left) and three-point bending (right). The compression test, in which the failure mechanism is governed mainly by indirect tension and shear, is considered the more appropriate model for examining the phenomenon of seismic rupture. The relation between the maximum size of the fracture that causes vibrations of amplitude a and the size of the tested sample is presently being studied. The analogy is further illustrated by Fig. 11, which shows the frequency distribution of seismic magnitudes, represented as Log(a), of seismic events registered in a circular region of 400 km radius centered at the Angra dos Reis NPP, Brazil, between 1961 and 2012, i.e. over a period of approximately 50 years. The existence of an upper limit for the magnitude, clearly hinted at by the results of small-scale laboratory experiments and implicit in all probability distributions shown in Fig. 1, cannot be inferred from Fig. 11, but remains a plausible assumption.
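Magnitude-frequency distributions of this kind are conventionally summarized by the Gutenberg-Richter b-value; a standard maximum-likelihood estimator (Aki, 1965) is sketched below. The catalogue is synthetic and illustrative, not the Angra dos Reis data, and this is the generic seismological tool rather than necessarily the authors' analysis.

```python
import math

def b_value(magnitudes, m_min):
    """Aki (1965) maximum-likelihood b-value for the Gutenberg-Richter
    relation log10 N = a - b*M, using events with M >= m_min:
    b = log10(e) / (mean(M) - m_min)."""
    selected = [m for m in magnitudes if m >= m_min]
    mean_m = sum(selected) / len(selected)
    return math.log10(math.e) / (mean_m - m_min)

# Illustrative synthetic catalogue of magnitudes above completeness m_min = 2.0.
mags = [2.1, 2.3, 2.2, 2.8, 3.0, 2.5, 3.4, 2.4, 2.6, 4.1]
print(round(b_value(mags, 2.0), 2))  # 0.59
```

An upper magnitude limit of the kind discussed above would appear as a departure from this unbounded exponential model at the large-magnitude tail, which is precisely what a short 50-year catalogue cannot resolve.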
A questionnaire-based survey was conducted to evaluate the AR applications. Respondents (a hundred and forty-eight engineering and science students from two institutions, Addis Ababa University, Addis Ababa, Ethiopia, and Obafemi Awolowo University, Ile-Ife, Nigeria) were asked to anonymously evaluate both AR applications described in Section 3. The concept of deploying AR applications for laboratory experiments is equally applicable to graduate students with lab-experiment requirements in their programs. Open-ended questionnaires were used to collect data (subjective opinions) about the quality of, and satisfaction with, the AR experiments. The majority of respondents reported that the applications were their first (conscious) contact with AR technology, although their mean age was between 21 and 24 years. A professor from each institution served as a voluntary local contact who also ensured that consenting volunteers (valid students) participated in the study without incentives, risks, or disadvantages. Participants were provided with an information sheet that informed them of the purpose of the study and provided assurance of confidentiality and of the intended use and end-of-life of the collected data. The sample population was quite diverse: 74% were undergraduate students (40% from the final year, 21% from the first year, and the rest sophomores and juniors), 19% were female, 41% were from the Department of Computer Science & Engineering, 27% from the Department of Electronic & Electrical Engineering, 10% from Computational Science, and 22% from the Physics and Mathematics departments. Data analysis involved identifying commonalities in the supplied responses for subsequent categorization and counting.
SRDC began implementation of the corresponding field experiment, the learn$ave project – a random-assignment demonstration project – shortly after completion of this study. learn$ave can be thought of as a pilot project for encouraging the working poor to save for post-secondary education. Participants were recruited for an information session that described the project and the odds of being randomly assigned to a treatment group or the control group. Those in the learn$ave treatment groups received various levels of post-secondary education expenses matched to personal saving levels, and some received financial counseling. The control group was surveyed but enjoyed none of the benefits of the treatment groups. Generally speaking, in most random-assignment projects, volunteers are randomly assigned to treatment groups and a control group after the information session. SRDC assigned volunteers to treatment groups that varied by province, match rate, and financial counseling. As part of the implementation, SRDC conducted 36 focus groups with participants and non-participants across Canada. Among the project participants, separate focus groups were formed of those who saved regularly and those who did not. Their findings, published in the implementation report (Kingwell et al., 2005), are strongly similar to our results and support the validity of laboratory experiments in parameterizing policies. We highlight some of those similarities.
Although many authors have mentioned the relation between a typical PWYW transaction and the dictator and trust games, this relation has not been discussed in detail. The first goal of this paper is to introduce the PWYW Game. Using the PWYW Game, we model the PWYW pricing situation as a sequential one-shot game and discuss its relation to dictator and trust games. Since the DG and the TG are subgames of the PWYW Game, a closer look at the results from laboratory experiments can inform sellers about the applicability of PWYW pricing. By scrutinizing the results derived from the DG and the TG, we expose the factors that are likely to contribute to higher prices under PWYW pricing. Outlining these factors is the second goal of the paper.
Abstract. We discuss a new phenomenon of turbulent thermal diffusion associated with turbulent transport of aerosols in the atmosphere and in laboratory experiments. The essence of this phenomenon is the appearance of a nondiffusive mean flux of particles in the direction of the mean heat flux, which results in the formation of large-scale inhomogeneities in the spatial distribution of aerosols, which accumulate in regions of minimum mean temperature of the surrounding fluid. This effect of turbulent thermal diffusion was detected experimentally. In the experiments, turbulence was generated by two oscillating grids, for two directions of the imposed vertical mean temperature gradient. We used Particle Image Velocimetry to determine the turbulent velocity field, and an image processing technique based on an analysis of the intensity of Mie scattering to determine the spatial distribution of aerosols. Analysis of the intensity of laser-light Mie scattering by aerosols showed that aerosols accumulate in the vicinity of the minimum mean temperature due to the effect of turbulent thermal diffusion.
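Schematically, following the turbulent thermal diffusion literature (the symbols here are generic and may differ from the paper's notation), the mean particle flux can be written as a turbulent-diffusion term plus an effective drift directed down the mean temperature gradient:

\[
\mathbf{J} = \bar{n}\,\mathbf{V}_{\mathrm{eff}} - D_T \nabla \bar{n},
\qquad
\mathbf{V}_{\mathrm{eff}} \propto -\,D_T\,\frac{\nabla \bar{T}}{\bar{T}},
\]

where \(\bar{n}\) is the mean particle number density, \(\bar{T}\) the mean fluid temperature, and \(D_T\) the turbulent diffusion coefficient. The drift term \(\mathbf{V}_{\mathrm{eff}}\) is the nondiffusive flux described above: in steady state it balances turbulent diffusion, so particles accumulate where \(\bar{T}\) is minimal.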
All of these observations imply that the EM field can significantly affect and even control the mechanical stability of systems that are close to the critical state. It therefore seems important to confirm by laboratory experiments the possibility of EM control of the mechanical behavior of systems that mimic large-scale fault dynamics, namely systems manifesting a stick-slip effect. The present paper deals with the results of experiments with a spring-slider system subjected to a constant pull, with weak mechanical or EM periodic forces superimposed on it.
Abstract. Measurements of spatial and temporal changes in the grain-size distribution of the bed surface and substrate are crucial to improving the modelling of sediment transport and associated grain-size-selective processes. We present three complementary techniques to determine such variations in the grain-size distribution of the bed surface in sand–gravel laboratory experiments, as well as the resulting size stratification: (1) particle colouring, (2) removal of sediment layers, and (3) image analysis. The resulting stratification measurement method was evaluated in two sets of experiments. In both sets, three grain-size fractions within the range of coarse sand to fine gravel were painted in different colours. Sediment layers were removed using a wet vacuum cleaner, and areal images were subsequently taken of the surface of each layer. The areal fraction content, that is, the relative presence of each size fraction over the bed surface, is determined using a colour segmentation algorithm that provides the areal fraction content of a specific colour (i.e. grain size) covering the bed surface. Particle colouring is beneficial not only to this type of image analysis but also to the observation and understanding of grain-size-selective processes. The size stratification based on areal fractions is measured with sufficient accuracy. Other advantages of the proposed size stratification measurement method are (a) rapid collection and processing of a large amount of data, (b) a very high spatial density of information on the grain-size distribution, (c) the lack of disturbances to the bed surface, (d) only minor disturbances to the substrate due to the removal of sediment layers, and (e) the possibility to return a sediment layer to its original elevation and continue the flume experiment. The areal fractions are converted into volumetric fractions using an existing conversion model.
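The core of such a colour-segmentation step can be sketched as nearest-reference-colour classification followed by per-colour pixel counting. The reference RGB values and the tiny synthetic image below are illustrative assumptions; the paper's actual algorithm and colour space may differ.

```python
from collections import Counter

# Hypothetical reference paint colours for three size fractions (RGB).
REFERENCE = {"fine": (200, 40, 40), "medium": (40, 180, 40), "coarse": (40, 40, 200)}

def classify(pixel):
    """Assign a pixel to the nearest reference colour (Euclidean distance
    in RGB). A production segmentation would typically use a more robust
    colour space and lighting correction."""
    return min(REFERENCE,
               key=lambda name: sum((p - r) ** 2
                                    for p, r in zip(pixel, REFERENCE[name])))

def areal_fractions(pixels):
    """Areal fraction content: share of the imaged bed surface covered
    by each painted grain-size fraction."""
    counts = Counter(classify(p) for p in pixels)
    total = sum(counts.values())
    return {name: counts.get(name, 0) / total for name in REFERENCE}

# Tiny synthetic "image": 6 reddish, 3 greenish, 1 bluish pixel.
img = [(190, 50, 60)] * 6 + [(60, 170, 50)] * 3 + [(30, 60, 190)]
print(areal_fractions(img))  # fine 0.6, medium 0.3, coarse 0.1
```

Repeating this per excavated layer yields the vertical stratification profile; converting these areal fractions to volumetric fractions would then use the existing conversion model mentioned above.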
When the students' academic achievement in the traditional laboratory format was compared with their achievement in the guided inquiry laboratory format, the findings revealed that achievement in the guided inquiry format was significantly higher (Table 1). The guided inquiry laboratory experiments thus led to an increase in the students' academic performance. Similarly, Tobin et al. (2012) ran a three-day inquiry-based workshop for K-8 teachers, and their results showed that the teachers developed an understanding of some important energy concepts. Similar results can be seen in other studies. Gaddis and Schoffstall (2007) developed guided-inquiry experiments for the organic chemistry laboratory and stated that these applications would develop students' conceptual understanding in the laboratory. McCright (2012) analyzed the effect of an inquiry-based learning project on environmental problems such as climate change on changes in students' beliefs, attitudes, behaviors, and scientific and quantitative literacy; the results showed that the students' knowledge of scientific problems increased and their research skills developed. Sesen and Tarhan (2013) investigated the effects of inquiry-based laboratory activities on high school students' understanding of electrochemistry and their attitudes towards chemistry and laboratory activities. The results showed that students in the inquiry-based laboratory format learned concepts related to electrochemistry and developed significantly more positive attitudes towards chemistry and laboratory work.
A remote laboratory allows users to access laboratory instruments, including programmable devices, remotely to perform their laboratory experiments. The existing remote laboratories on Digital Signal Processor hardware either use the server machine to control the test instruments over a GPIB interface, or establish control through DAQ cards. This makes the architecture implementation-specific, so it cannot be reused for other laboratories. The implementation described here makes the laboratory facility available twenty-four hours a day and increases the productivity of the laboratory setups and measuring instruments. The remote instrumentation laboratory for DSP training uses a client–server methodology and connects multiple clients to the server using a Virtual Instrument application. A thin client server manages inputs and outputs between clients and servers. The remote access tool used for this type of laboratory implementation is selected on the basis of real-time access parameters such as data speed, security protocol, and the ability to establish a multi-user environment.
The Marchenko redatuming method estimates surface-to-subsurface Green's functions. It has been employed to diminish the effects of multiples in seismic data. Several such methods rely on an absolute scaling of the data; this is usually considered to be known in synthetic experiments, or is estimated using heuristic methods for real data. Here, we show using real ultrasonic laboratory data that the most common of these methods may be ill suited to the task, and that reliable ways to estimate the scaling remain unavailable. Marchenko methods that rely on adaptive subtraction may therefore be more appropriate. We present two adaptive Marchenko methods: one is an extension of a current adaptive method, and the other is an adaptive implementation of a non-adaptive method. Our results show that Marchenko methods improve imaging compared with reverse-time migration, but less so than expected. This reveals that some Marchenko assumptions were violated in our experiment, and likely are in seismic data as well, showing that laboratory experiments contribute critical information to the development and testing of Marchenko-based methods.
Bright et al. (2008) state that while the transition towards increased usage of remote labs may appear on the surface to be a simple change of access mode, there is a wide range of factors at play. The environment in which learning occurs, whether online or face to face, involves a complex array of factors that influence learner achievement and satisfaction (Stein & Wanstreet 2003). Bright et al. (2008) believe that if these factors are not properly considered during the design of a laboratory experience, they have the potential to significantly affect student learning outcomes. Their study reviews these factors and the impact each may have on learning outcomes. The primary conclusion of the work is that the complexity of these factors should not be underestimated.