In this paper, the existence of chaotic behavior in the single-well Duffing oscillator was examined under parametric excitations using the Melnikov method and Lyapunov exponents. The minimum and maximum values were obtained, and the dynamical behaviors showed the intersections of the stable and unstable manifolds, which were illustrated using the Mathcad software. This extends some results in the literature. Simulation results indicate that the single-well oscillator is sensitive to sinusoidal signals at high frequencies, and that a high damping factor reduces the amplitude of the oscillator.
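A minimal numerical sketch of the damping effect described above (our own illustration, not the paper's code): a single-well Duffing oscillator under external sinusoidal forcing, a simplification of the parametric excitation studied in the paper, integrated with fixed-step RK4. All parameter values are illustrative assumptions.

```python
# Single-well Duffing oscillator: x'' + d*x' + a*x + b*x^3 = F*cos(w*t).
# Hypothetical parameters; integrated with a fixed-step RK4 scheme.
import math

def duffing_amplitude(d, a=1.0, b=1.0, F=0.3, w=1.0, dt=0.01, steps=30000):
    """Integrate and return max |x| over the second half (steady state)."""
    def f(t, x, v):
        return v, F * math.cos(w * t) - d * v - a * x - b * x ** 3
    x, v, t, amp = 0.1, 0.0, 0.0, 0.0
    for i in range(steps):
        k1x, k1v = f(t, x, v)
        k2x, k2v = f(t + dt / 2, x + dt / 2 * k1x, v + dt / 2 * k1v)
        k3x, k3v = f(t + dt / 2, x + dt / 2 * k2x, v + dt / 2 * k2v)
        k4x, k4v = f(t + dt, x + dt * k3x, v + dt * k3v)
        x += dt / 6 * (k1x + 2 * k2x + 2 * k3x + k4x)
        v += dt / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
        t += dt
        if i > steps // 2:
            amp = max(amp, abs(x))
    return amp

# Higher damping should reduce the steady-state amplitude:
print(duffing_amplitude(d=0.1) > duffing_amplitude(d=1.0))
```

With these assumed values the lightly damped run settles to a visibly larger steady-state amplitude than the heavily damped one, consistent with the abstract's observation.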
Abstract. When using the single-well push–pull (SWPP) test to determine in situ biogeochemical reaction kinetics, a chase phase and a rest phase are recommended, in addition to the injection and extraction phases, to increase the duration of reaction. In this study, we present multi-species reactive models of the four-phase SWPP test that account for wellbore storage in both groundwater flow and solute transport and for a finite aquifer hydraulic diffusivity, both of which were ignored in previous studies. The models of wellbore storage for solute transport were derived from mass balance, and sensitivity and uniqueness analyses were employed to investigate the assumptions used in previous studies for parameter estimation. The results showed that ignoring wellbore storage might produce great errors in the SWPP test. In the injection and chase phases, the influence of the wellbore storage increased with decreasing aquifer hydraulic diffusivity. The peak values of the breakthrough curves (BTCs) increased with increasing aquifer hydraulic diffusivity in the extraction phase, and the arrival time of the peak value became shorter with a greater aquifer hydraulic diffusivity. Meanwhile, the Robin condition performed well in the rest phase only when the chase concentration was zero and the solute of the injection phase was completely flushed out of the borehole into the aquifer. The Danckwerts condition was better than the Robin condition even when the chase concentration was not zero. The reaction parameters could be determined by directly best fitting the observed data when the nonlinear reactions were described by piece-wise linear functions, whereas such an approach might not work if one attempted to use nonlinear functions to describe such nonlinear reactions.
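The piece-wise linear idea mentioned at the end can be sketched as follows (our own illustration under assumed names and an assumed Monod-type rate law, not the paper's model): a nonlinear reaction rate r(C) is replaced by linear interpolation between concentration nodes, leaving one rate value per node to estimate by fitting.

```python
# Piece-wise linear approximation of a nonlinear reaction rate r(C).
# The Monod-type rate and all parameter values are illustrative assumptions.
import bisect

def piecewise_linear(nodes, rates):
    """Return r(C) that linearly interpolates the pairs (nodes[i], rates[i])."""
    def r(c):
        c = min(max(c, nodes[0]), nodes[-1])          # clamp to node range
        i = min(max(bisect.bisect_right(nodes, c) - 1, 0), len(nodes) - 2)
        w = (c - nodes[i]) / (nodes[i + 1] - nodes[i])
        return (1 - w) * rates[i] + w * rates[i + 1]
    return r

# Nonlinear rate to be approximated (assumed Monod form):
rmax, K = 2.0, 0.5
monod = lambda c: rmax * c / (K + c)

nodes = [0.0, 0.25, 0.5, 1.0, 2.0, 4.0]
r_pl = piecewise_linear(nodes, [monod(c) for c in nodes])

# Worst-case interpolation error over the concentration range:
err = max(abs(r_pl(c / 100) - monod(c / 100)) for c in range(0, 401))
print(err < 0.1)
```

Each segment's slope is a linear parameter, which is why direct best fitting works for the piece-wise model while a fully nonlinear rate law may not.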
Abstract. Hydrocarbon reservoirs are extremely complex, each reservoir having its own identity. Reservoir heterogeneity (mainly in layered reservoirs) frequently results in low recovery efficiencies, both under the primary regime and when different agents are injected from the surface. The efficiency of EOR processes depends on how thoroughly the reservoir is known and on the information available about fluid flow through the reservoir. Certain analyses, investigations and tests provide good knowledge of the reservoir. Tracer tests are among them, being frequently used in water injection processes. Depending on the method used (IWTT, Interwell Tracer Test; SWTT, Single-Well Tracer Test; TWTT, Two-Well Tracer Test), information is obtained on: the preferential flow paths of the injected fluid, the identification of water channels, evidence of geological barriers, and the residual oil saturation around the wellbore or along the tracer's path between two wells. This paper focuses on the ICPT Câmpina efforts related to the use of chemical tracers in the water injection processes applied to the oil reservoirs of Romania. It describes the usual tracers and the methods used to detect them in the reaction wells. Up to now, more than 50 IWTT tests have been performed on-site, and this work presents some of their results.
Abstract— As one of the most promising nonlinear dimensionality reduction techniques, Isometric Mapping (ISOMAP) performs well only when the data belong to a single well-sampled manifold, where geodesic distances can be well approximated by the corresponding shortest-path distances in a suitable neighborhood graph. Unfortunately, the approximation generally becomes less precise as the number of edges of the corresponding shortest path increases, which makes ISOMAP tend to overlap or overcluster the data, especially for disjoint or imperfect manifolds. To alleviate this problem, this paper presents a variant of ISOMAP, Edge Number-based ISOMAP (EN-ISOMAP), which uses a new variant of Multidimensional Scaling (MDS), Edge Number-based Multidimensional Scaling (EN-MDS), instead of Classical Multidimensional Scaling (CMDS) to map the data into the low-dimensional embedding space. As a nonlinear variant of MDS, EN-MDS gives larger weight to distances spanning fewer edges, which are generally better approximated and therefore more trustworthy than those spanning more edges, and can thus preserve the more trustworthy distances more precisely. Finally, experimental results verify that EN-ISOMAP can visualize not only imperfect manifolds but also intrinsically curved manifolds well.
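The edge-count weighting can be sketched as follows (a minimal reimplementation of the idea, not the authors' code): run Floyd–Warshall on the neighborhood graph while tracking the number of edges on each shortest path, then down-weight many-edge (less trustworthy) distances, e.g. with weight 1/edges.

```python
# Geodesic distances plus edge counts on a neighborhood graph,
# via Floyd-Warshall; the 1/edges weighting is one simple choice.
INF = float("inf")

def geodesics_with_edge_counts(w):
    """Return (dist, edges): shortest-path lengths and their edge counts."""
    n = len(w)
    dist = [[w[i][j] if i != j else 0.0 for j in range(n)] for i in range(n)]
    edges = [[(1 if w[i][j] < INF else 0) if i != j else 0
              for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
                    edges[i][j] = edges[i][k] + edges[k][j]
    return dist, edges

# Tiny path graph 0-1-2-3 with unit edge weights:
W = [[0, 1, INF, INF], [1, 0, 1, INF], [INF, 1, 0, 1], [INF, INF, 1, 0]]
D, E = geodesics_with_edge_counts(W)
weight = [[1.0 / E[i][j] if E[i][j] else 0.0 for j in range(4)]
          for i in range(4)]
print(D[0][3], E[0][3], weight[0][3])  # the 3-edge path gets weight 1/3
```

Pairs joined by a single graph edge keep full weight, while long multi-hop geodesics, whose shortest-path approximation degrades, contribute less to the MDS stress.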
In contrast to the reference case, in the first step of the simulation (water extraction by a single well near a highly permeable fault zone) a barrier effect is produced, i.e. the water levels on the two sides of the fault zone differ. The results for the water saturation of the system with a 45° high-permeability fault zone and single-well extraction (Fig. 4) show a
General-Purpose Computing on Graphics Processing Units (GPGPU) is the technique of using a GPU to solve computational problems that are traditionally handled by the CPU. Originally, GPUs were designed only to handle the computations needed for computer graphics. GPUs are purely numeric computing engines: they may perform well for graphical applications but, in some cases, not for tasks that CPUs are designed to perform well. Consequently, most applications will use both CPUs and GPUs, executing the sequential parts of a program (or application) on the CPU and the numerically intensive parts on the GPU.
These types of well profiles are normally used in appraisal wells to assess the extent of a newly discovered reservoir. This type of wellbore is drilled when there is a hindrance, such as a salt dome, or when the well has to be side-tracked. The well is drilled vertically to a deep KOP (kick-off point), and then inclination is built quickly to the target.
Recent technological progress in manipulating low-entropy quantum states has motivated us to study the phenomenon of interaction blockade in bosonic systems. We propose an experimental protocol to observe the expected bosonic enhancement factor in this blockade regime. Specifically, we suggest the use of an asymmetric double-well potential constructed by superposition of multiple optical tweezer laser beams. Numerical simulations using the MCTDHB method predict that the relevant states and the expected enhancement factor can be observed.
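The asymmetric double well can be sketched numerically as follows (an illustration under assumed beam parameters, not the paper's values): two attractive Gaussian tweezer beams of unequal depth superpose into a tilted double-well potential.

```python
# Asymmetric double well from two superposed Gaussian tweezer beams.
# Depths, centers and waist below are hypothetical, dimensionless values.
import math

def tweezer_potential(x, depths=(1.0, 0.7), centers=(-1.0, 1.0), waist=0.8):
    """V(x) = -sum_i U_i * exp(-2 (x - x_i)^2 / w^2)  (Gaussian-beam form)."""
    return -sum(U * math.exp(-2 * (x - c) ** 2 / waist ** 2)
                for U, c in zip(depths, centers))

xs = [i / 100 for i in range(-300, 301)]
vs = [tweezer_potential(x) for x in xs]

# The two local minima sit near the beam centers; their unequal depths
# produce the asymmetry (tilt) between the wells:
left = min(v for x, v in zip(xs, vs) if x < 0)
right = min(v for x, v in zip(xs, vs) if x > 0)
print(left < right)  # the deeper well lies under the stronger beam
```

Tuning the relative beam depths sets the well asymmetry, which is the control knob such a tweezer construction offers for blockade experiments.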
Where soil conditions are favourable and open excavation is feasible, construction of a pumping station will proceed as for any normal tank. Under these circumstances the shape and design of the station will be governed by the requirements of fitting the equipment and the cost of construction. Sometimes a rectangular plan shape may be found to be the best. In many cases it will be found that water-charged sands, silts or clays exist for the full depth of the station. In these circumstances the method of construction frequently adopted is the open well caisson technique. A reinforced concrete shell, sometimes with a cutting edge on the lower perimeter, is formed above the groundwater level and is progressively sunk by excavating within the caisson wall. The caisson normally sinks under its own weight, but where sinking difficulties are encountered, kentledge (temporarily superimposed dead weight) may be used to provide the additional weight. An alternative design using a lubricant, such as bentonite, to reduce the resistance to sinking may also be considered.
The idea of application-dependent logic block testing is presented in . In this BIST scheme, each used logic block is tested exhaustively (or super-exhaustively, i.e., with all possible transitions), and all of these logic blocks are tested concurrently. The global interconnect is reprogrammed in such a way that the test signals are routed to each logic block. A Linear Feedback Shift Register (LFSR) or a binary counter generating the test vectors is connected to the inputs of all used logic blocks. The logic block outputs are observed through an internal response compactor (e.g., an XOR tree). The response compactor can be combined with a response (parity) predictor, as will be explained shortly, such that a unique pass/fail signal can be generated. The LFSR and the XOR tree are implemented in the available unused logic blocks. Since the LFSR or binary counter generates all possible patterns (2^n patterns for an n-input logic block) and the XOR tree propagates any single fault to its output, any single functional fault in the used logic blocks is propagated to the output of the XOR tree and is detected. A functional fault is any fault that changes the truth table of an LUT, including stuck-at faults.
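The two building blocks of this scheme can be sketched in software (our own illustration of the principle, not the cited implementation): a maximal-length LFSR enumerates every nonzero n-bit test pattern, and an XOR tree compacts the block outputs so that any single flipped output changes the compacted parity.

```python
# BIST building blocks: an exhaustive LFSR pattern generator and an
# XOR-tree response compactor. Tap choice x^4 + x^3 + 1 is one
# maximal-length polynomial for n = 4.
from functools import reduce

def lfsr_patterns(n=4, taps=(4, 3)):
    """Yield all 2^n - 1 nonzero states of a Fibonacci LFSR."""
    state = 1
    for _ in range(2 ** n - 1):
        yield state
        fb = reduce(lambda a, t: a ^ (state >> (t - 1)) & 1, taps, 0)
        state = ((state << 1) | fb) & (2 ** n - 1)

patterns = list(lfsr_patterns())
# Exhaustive: every nonzero 4-bit input pattern appears exactly once.
print(sorted(patterns) == list(range(1, 16)))

def xor_tree(bits):
    """Response compactor: parity over all logic-block outputs."""
    return reduce(lambda a, b: a ^ b, bits, 0)

# A single faulty output bit always flips the compacted response:
good = xor_tree([1, 0, 1, 1])
faulty = xor_tree([1, 0, 0, 1])  # one block output flipped
print(good != faulty)
```

The all-zero pattern is the one input a plain LFSR never produces; a binary counter, as the text notes, covers it as well.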
dent ARCH and SV-type specifications given appropriate stationarity conditions. The theoretical developments are described in a series of recent papers; see in particular Kokoszka and Leipus (1998, 1999, 2000) and Lavielle and Moulines (2000). So far, only limited simulation and empirical evidence has been reported for these tests. We enlarge their scope of applicability by suggesting several improvements that enhance the practical implementation of the proposed tests. This paper focuses on the Kokoszka and Leipus (2000) and Lavielle and Moulines (2000) tests and proposes three types of extensions. First, we find via simulations that the VARHAC estimator proposed by den Haan and Levin (1997) yields good properties for the CUSUM-type estimator of Kokoszka and Leipus (2000). Simulation evidence is also presented for the application of this test in the multiple-breaks setting using a sequential sample segmentation approach similar to that of Inclán and Tiao (1994). Second, the series used in the tests so far are either squared or absolute returns. We suggest applying these tests to more precise measures of volatility, including the high-frequency data-driven processes studied by Andersen et al. (2001), Andreou and Ghysels (2002), and Barndorff-Nielsen and Shephard (2000), among others. Third, the finite-sample performance of these new tests is assessed via extensive Monte Carlo simulations for realistic univariate GARCH models, with single and multiple breaks, as well as different algorithms and information criteria for the multiple-breaks case.
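The CUSUM-type statistic at the heart of the Kokoszka and Leipus (2000) test can be sketched as follows (a simplified illustration on synthetic data: the long-run variance normalization, e.g. via the VARHAC estimator, is omitted here).

```python
# CUSUM-type break statistic applied to squared returns:
# sup_k |(1/sqrt(N)) * (S_k - (k/N) * S_N)|, S_k the partial sum.
# Synthetic data and the mid-sample break location are assumptions.
import math
import random

def cusum_stat(x):
    """Unnormalized CUSUM-type statistic for a series x."""
    n = len(x)
    total = sum(x)
    best, s = 0.0, 0.0
    for k, v in enumerate(x, start=1):
        s += v
        best = max(best, abs(s - k / n * total) / math.sqrt(n))
    return best

random.seed(0)
# Squared returns with a variance break at mid-sample vs. no break:
r2_break = [random.gauss(0, 1) ** 2 for _ in range(500)] + \
           [random.gauss(0, 3) ** 2 for _ in range(500)]
r2_const = [random.gauss(0, 1) ** 2 for _ in range(1000)]
print(cusum_stat(r2_break) > cusum_stat(r2_const))
```

Under the null the statistic behaves like the supremum of a Brownian bridge (after dividing by a long-run variance estimate), while a variance break inflates it roughly in proportion to sqrt(N) times the break size, which is what the comparison above illustrates.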
Recently, some researchers (Morchid et al., 2014a; Morchid et al., 2014b; Esteve et al., 2015) have worked on topic identification for analyzing human-human dialogues. Although they do not aim at building components of dialogue systems directly, the human behaviours learned from the conversations can suggest directions for further advancement of conversational agents. However, the problem defined in these studies rests on the assumption that every dialogue session is assigned just a single theme category, which means that any topic shift occurring within a session is left out of consideration in the analyses.
Interface fluctuation effects have been investigated for lattice-matched InGaAs/InAlAs single QWs with well widths of 7 and 15 nm. The excitation-intensity-dependent PL shows that the 7-nm QW has a large, 15 meV, blueshift of the PL peak energy as the laser excitation intensity increases from 0.01 to 100 W/cm². Temperature-dependent PL shows that the PL peak energy of the 7-nm QW sample displays a blueshift at first and then a redshift with increasing temperature, where the magnitude of the blueshift depends on the laser excitation intensity. The lower the laser excitation intensity, the greater the blueshift. These observations are explained by localized fluctuations at the InGaAs/InAlAs interface. The experimental and calculated results indicate that thinner QWs are affected more by interface fluctuations, in particular by the thickness fluctuation rather than the composition fluctuation. This work indicates that it is very important to optimize the interface for achieving high-quality InGaAs/InAlAs QW heterostructures. Different growth-optimization approaches should be selected to achieve the best-quality QWs, depending on the thickness of the designed InGaAs/InAlAs structures.
Rey et al.  designed transmit prefiltering matrices that convey as much of the desired signal power as possible to the reference user and as little interference as possible to the remaining users. Under the unrealistic simplifying assumption that perfect channel information for all users is available at the transmitter even before transmission, the multiuser transmitter is able to separate the individual users' transmitted signals as well as to activate the particular TVTBR-AOFDM mode capable of maintaining the target BER. In reality, however, perfect noncausal multiuser channel information is unavailable, and hence the authors aimed at minimizing the effects of channel estimation errors. A range of challenging open research problems arises from the related design options. First, in time division duplex (TDD) systems the up-link (UL) and down-link (DL) signals are transmitted on the same frequency and hence are likely to have a similar FDCHTF, which allows the transmitter to assume that the FDCHTF about to be experienced is similar to that estimated on the basis of the received signal. Another design option is to explicitly signal the FDCHTF from the remote receiver to the transmitter using, for example, high-compression vector quantization. A third design option is to use long-range channel prediction to predict the FDCHTF about to be experienced in the future on the basis of previous FDCHTFs explicitly signaled by the receiver to the transmitter of the TVTBR-AOFDM system. The structure of the transmit correlation matrix was studied by Sampath et al.  with the aid of field-trial results gleaned from a MIMO OFDM system, and these results could be beneficially exploited for designing MIMO-aided TVTBR-AOFDM schemes.
frequency regime of 50–1000 Hz, as most applications of pyroelectric infrared detectors are in this frequency range. Figure 2 shows the frequency dependence of the dielectric constant and loss for poled Fe-doped PMN–0.38PT single crystals at room temperature. For the 0.2 mol % Fe-doped single crystal, as shown in Fig. 2(a), the dielectric constant and loss at 50 Hz are about 310 and 0.0067, respectively. They are almost invariable up to 1 kHz. The dielectric constant reported at 1 kHz for a pure PMN–0.38PT single crystal is ~700 [16], which is much higher than that of the 0.2 mol % Fe-doped sample. No value of the dielectric loss for a pure PMN–0.38PT single crystal has been reported. The result shows that the dielectric constant of a PMN–0.38PT single crystal is controlled successfully by doping with a small concentration of iron ions. For a higher doping concentration, the dielectric constant of the 1.0 mol % Fe-doped single crystal is about 725 at 50 Hz and decreases slightly with increasing frequency. The dielectric constants are similar to those of an undoped single crystal, but its dielectric losses are very high, more than 0.16 at 50 Hz. Although the dielectric loss decreases sharply with increasing frequency, it is still about 0.02 at 1 kHz, much higher than that of the 0.2 mol % Fe-doped one.
The accuracies achieved by the Same Participant - All Task models were on par with those seen in the SP-ST configuration. In fact, the mean ANN accuracy for SP-AT was higher than the mean accuracy of both classifier types in the SP-ST configuration. This high accuracy suggests that relevant patterns existed for participants independently of the particular task, at least for the two tasks in the study. The higher variation in accuracies between participants suggests that the tasks elicited similar responses for some individuals but less so for others. This configuration could be useful in a real-world application, particularly with such high accuracy, as it means a single model could be created and used for an individual, rather than a new model being required for each task that the individual performs. The accuracies achieved in the All Participant - All Task configuration were similar to those seen in the AP-ST models. This is not surprising, as it was expected that the AP-AT models would be upper bounded by the accuracies of the AP-ST and SP-AT models, AP-AT being the combination of the other two configurations. The high accuracy achieved by the random forest model shows that it did almost as well as the single participant, single task models reported in previous studies  and even did better than other reported models . The ANN model did not do as well as the random forest model; however, it still achieved results similar to models in previous studies. The discrepancy between the ANN and random forest accuracies is likely a continuation of the effect seen in the AP-ST configuration.