Permeability estimation from time-lapse seismic data for updating the flow-simulation model

A pressured-up compartment highlights the faults that surround it (the compartment boundary). The pressure profile is very smooth and does not respond to minor heterogeneities within the compartment; it is therefore more likely to reveal barrier connectivity around the compartments than reservoir-matrix connectivity. To test this further, the model used previously (SSM) now includes faults. Transmissibility multipliers describing fault connectivity are assigned to the model (Figure 5.13(a)), and transmissibility values describing the static sand connectivity are shown in Figure 5.13(b). Figures 5.13(c) to 5.13(h) show the simulated pressure-change and saturation-change profiles for a sequence of time-steps (1999–1998, 2000–1998 and 1999–1998). Note that injectors CW16 and CW17 are not active during this period. At first glance, the pressure profile appears to differ completely from that of the previous synthetic model, since the faults and compartments strongly affect the pressure response in this model. The introduced compartments are clearly projected onto the pressure profile, and each pressure image at a given time-step highlights most of the compartments. Unlike the saturation signal, pressure diffusion is fast compared with saturation evolution; pressure therefore provides full coverage of the reservoir and is a suitable candidate for imaging its connectivity. Saturation change still follows the channel heterogeneity shown in the transmissibility map (Figure 5.13(b)). Which of these connectivities (hydraulic sand connectivity or barrier connectivity) can be estimated from 4D seismic therefore depends on whether the 4D-seismic response is influenced more by the saturation or the pressure signal. This is analysed further below.
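The contrast between pressure and saturation behaviour described above can be illustrated with a toy calculation. The following is a minimal sketch (not the SSM model from the text) of single-phase pressure diffusion on a 2-D grid in which one row of inter-cell connections carries a low transmissibility multiplier, mimicking a sealing fault; the grid size, multiplier value and injector position are illustrative assumptions.

```python
import numpy as np

# Toy 2-D pressure-diffusion sketch: a low-transmissibility "fault" row
# confines the pressure-up signal to one compartment. All values are
# illustrative assumptions, not the SSM simulation model from the text.
nx, ny, nsteps, dt = 50, 50, 500, 0.1
p = np.zeros((ny, nx))                    # pressure change, arbitrary units

# Inter-cell transmissibility multipliers (1 = open, 0.001 = sealing fault)
tx = np.ones((ny, nx - 1))                # between columns
ty = np.ones((ny - 1, nx))                # between rows
ty[24, :] = 0.001                         # sealing fault across the middle

inj = (10, 25)                            # injector cell (row, col), assumed

for _ in range(nsteps):
    p[inj] += 1.0                         # constant-rate injection source
    # explicit finite-difference update with transmissibility weighting
    flux_x = tx * (p[:, 1:] - p[:, :-1])
    flux_y = ty * (p[1:, :] - p[:-1, :])
    dp = np.zeros_like(p)
    dp[:, :-1] += flux_x
    dp[:, 1:] -= flux_x
    dp[:-1, :] += flux_y
    dp[1:, :] -= flux_y
    p += dt * dp

# The pressure change fills the injector's compartment almost uniformly
# and drops sharply across the fault row: the "barrier connectivity"
# signature discussed above.
print(p[:25, :].mean(), p[25:, :].mean())
```

Because the pressure change equilibrates quickly within the open compartment but not across the fault, a single pressure-change snapshot outlines the compartment boundary, whereas a saturation front would instead trace the channel heterogeneity.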

Quantitative application of 4D seismic data for updating thin-reservoir models

compared with the impedances obtained from the simulation model through petro-elastic modelling. This approach, albeit visually compelling, shares the drawbacks of the previous two: inversion to impedances is likely to ignore the uncertainties, and petro-elastic modelling of the simulation model requires reliable petro-elastic model parameters. Nevertheless, the impedance domain appears to be the most popular in the history-matching literature, see e.g. [22], [23], [24], [25], [26]. While there are reports of better history-matching performance in this domain than, for example, in the amplitude domain [27], it is still not clear whether the popularity of history matching in the impedance domain is due to its robustness, or because this domain is an acceptable compromise understood by both the engineering and the geophysical communities. There are also approaches which avoid a direct comparison (e.g. in the least-squares sense) of the observed and modelled time-lapse reservoir signatures, calculating instead some correlation measure between the quantities in question. These are usually employed for reservoirs where petro-elastic modelling is challenging. For example, Waggoner et al. [28] used the normalised cross-correlation between the observed and modelled maps of acoustic impedance for history matching of a Gulf of Mexico gas-condensate reservoir. Kjelstadli et al. [29] employed the correlation between observed and modelled attribute maps to history-match a North Sea compacting chalk reservoir, where adequate seismic modelling was problematic.
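As a concrete illustration of the correlation-based comparison mentioned above (e.g. Waggoner et al. [28]), a minimal sketch of a normalised cross-correlation misfit between observed and modelled time-lapse impedance maps might look as follows; the map shapes, the random placeholder data and the way the measure is turned into an objective function are assumptions, not the published workflow.

```python
import numpy as np

def ncc(observed: np.ndarray, modelled: np.ndarray) -> float:
    """Normalised cross-correlation between two time-lapse attribute maps.

    Both maps are flattened, mean-removed and scaled by their norms, so the
    result lies in [-1, 1]; 1 means the modelled map reproduces the spatial
    pattern of the observed map exactly.
    """
    o = observed.ravel() - observed.mean()
    m = modelled.ravel() - modelled.mean()
    return float(np.dot(o, m) / (np.linalg.norm(o) * np.linalg.norm(m)))

# In a history-matching loop one would minimise, e.g., 1 - ncc(obs, mod),
# which rewards matching the spatial pattern of the 4D signal rather than
# its absolute amplitudes (useful when petro-elastic modelling is uncertain).
obs = np.random.default_rng(0).normal(size=(60, 80))   # placeholder maps
mod = obs + 0.3 * np.random.default_rng(1).normal(size=obs.shape)
print("NCC misfit:", 1.0 - ncc(obs, mod))
```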

An Intelligence Approach for Porosity and Permeability Prediction of Oil Reservoirs using Seismic Data

Rock physics models explain crucial relations between reservoir parameters and the seismic properties of reservoir rock, and they are important not only for a time-lapse seismic project but also for reservoir characterization in general. In this work it was shown that the Duffy-Mindlin model works well for a porous reservoir, especially in the presence of shale compaction. In this case, constructing the rock physics model requires not only empirical relations but also a contact theoretical model for the calculation of the dry moduli. Geertsma's empirical relation is often used because of its consistency with the Duffy-Mindlin model. Among theoretical models, the Hertz-Mindlin model is the most popular, since the other theoretical models build on it. This paper has shown that the modified Hertz-Mindlin model is a considerably more accurate predictor than the modified Geertsma model and is more suitable for the calculation of the dry moduli. It has also been shown that Genetic Algorithms are a feasible technique for reservoir characterization using time-lapse seismic data. The method can handle many parameters, which is critical when dealing with large full-field reservoir simulation models. This paper has demonstrated the application of a Genetic Algorithm to a realistic case, with attention to the main issues in formulating the model for reservoir characterization.
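For reference, the standard Hertz-Mindlin expressions for the dry-frame moduli of a random sphere pack (as given, for example, in Mavko et al., The Rock Physics Handbook) can be coded in a few lines. The coordination number, critical porosity and grain moduli below are typical textbook values, not values calibrated to the reservoir discussed in the paper, and the "modified" variants mentioned above would add further terms.

```python
import numpy as np

def hertz_mindlin_dry(K_grain, G_grain, phi_c, C, P_eff):
    """Dry-frame bulk and shear moduli at critical porosity (Hertz-Mindlin).

    K_grain, G_grain : grain bulk/shear moduli [GPa]
    phi_c            : critical porosity (fraction)
    C                : coordination number (contacts per grain)
    P_eff            : effective pressure [GPa]
    """
    nu = (3 * K_grain - 2 * G_grain) / (2 * (3 * K_grain + G_grain))  # grain Poisson ratio
    K_hm = (C**2 * (1 - phi_c)**2 * G_grain**2 * P_eff
            / (18 * np.pi**2 * (1 - nu)**2)) ** (1.0 / 3.0)
    G_hm = ((5 - 4 * nu) / (5 * (2 - nu))) * (
        3 * C**2 * (1 - phi_c)**2 * G_grain**2 * P_eff
        / (2 * np.pi**2 * (1 - nu)**2)) ** (1.0 / 3.0)
    return K_hm, G_hm

# Typical quartz-sand inputs (assumed): K = 36.6 GPa, G = 45 GPa,
# phi_c = 0.4, C = 9 contacts, 20 MPa effective pressure.
print(hertz_mindlin_dry(36.6, 45.0, 0.4, 9, 0.02))
```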

Bayesian Updating of Seismic Fragility of Structures Using Time-History Simulations

The fundamental goal in the seismic design of any Nuclear Power Plant (NPP) is the minimization of radiological risk (release of radioactive emissions into the environment) and ensuring operational safety immediately following an earthquake. It is critical that engineers be able to estimate the associated risks due to seismic events to a high degree of confidence. In the nuclear industry, this is done using a detailed probabilistic framework that carefully accounts for all possible uncertainties in the expected intensity of ground motions, the predicted response of the structures and, ultimately, the chain of events that could lead to catastrophic failure. Conventional industry practice estimates the seismic fragility of structures using simplified, aggregated methods which rely heavily on safety factors from design codes, parameter values from past studies, expert judgment, experience data, etc. These approximate approaches lead to highly conservative estimates of structural fragility curves with wide confidence bands. For newer structural systems being designed for future NPPs, additional fragility data often need to be acquired to estimate fragility to a specified degree of confidence. Of course, experimental testing for fragility data on large-scale systems is in most cases prohibitively expensive and impractical. For real-life complex systems with non-standard designs, one realistic way of estimating the seismic risk of the system is to use a robust, experimentally validated Finite-Element (FE) simulation model and perform multiple dynamic analyses considering uncertainties in material, modeling and loading variables. But this probabilistic approach to seismic risk estimation has one important drawback: it requires a large number of computationally intensive simulations for accurate estimates of risk. Typically, these constraints demand a trade-off between the desired accuracy of the risk estimates and the amount of total fragility data that can be collected at a reasonable cost.
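To make the fragility-estimation step concrete, below is a minimal sketch of a lognormal fragility model updated with binary failure/no-failure outcomes from time-history simulations. The prior range, ground-motion intensities and outcomes are invented for illustration and do not come from the paper; the paper's actual Bayesian formulation may differ.

```python
import numpy as np
from scipy.stats import norm

# Lognormal fragility: P(failure | IM = a) = Phi(ln(a / Am) / beta).
# Hypothetical simulation outcomes: intensity measure and failure flag.
im   = np.array([0.2, 0.3, 0.4, 0.5, 0.6, 0.8, 1.0, 1.2])   # e.g. PGA [g]
fail = np.array([0,   0,   0,   1,   0,   1,   1,   1  ])

# Grid over the fragility parameters with a broad (assumed) flat prior.
Am_grid   = np.linspace(0.2, 2.0, 181)
beta_grid = np.linspace(0.2, 0.8, 61)
Am, Beta  = np.meshgrid(Am_grid, beta_grid, indexing="ij")

log_post = np.zeros_like(Am)
for a, y in zip(im, fail):
    p = norm.cdf(np.log(a / Am) / Beta)
    p = np.clip(p, 1e-12, 1 - 1e-12)
    log_post += y * np.log(p) + (1 - y) * np.log(1 - p)   # Bernoulli likelihood
# flat prior over the grid -> posterior proportional to the likelihood
post = np.exp(log_post - log_post.max())
post /= post.sum()

# Posterior-mean fragility parameters after updating with the simulations.
print("Am   =", float((post * Am).sum()))
print("beta =", float((post * Beta).sum()))
```

Each additional batch of simulation results tightens the posterior on the median capacity Am and dispersion beta, which is exactly the trade-off between simulation cost and confidence described above.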

Automatic Counting of Canola Flowers from In-Field Time-Lapse Images

In some of the images, when the sky is clear, direct sunlight hits the plants and the canola leaves are backlit. This causes the areas corresponding to canola leaves to show a strong yellow component, which can affect the entire flower-detection process, as our method relies on the characteristic yellow colour of canola flowers to identify them. Figure 3.3 shows two pictures taken of the same area at different times of the day. In each image, we applied a threshold on the CIELab b component (which measures yellow-vs-blue intensity) and painted in red every pixel that passed the threshold. The intention is to show that the CIELab b intensity of the canola leaves increases when direct sunlight hits them. When the sky is cloudy, however, the lighting is diffuse, as the sunlight is not hitting the field directly. This is the case in the left image of Figure 3.3, where the pixels that passed the threshold correspond to flower pixels. As we will see in later chapters, our method uses the CIELab b component to detect flowers, so direct sunlight may interfere with the detection of yellow flowers and produce false positives (i.e., detecting flowers where there are none).
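The thresholding step described here can be sketched in a few lines with OpenCV; the image path and threshold value below are assumptions chosen for illustration, not the values used in the thesis.

```python
import cv2

# Highlight strongly "yellow" pixels via the CIELab b channel, as in the
# illustration described above. The threshold (150) is an assumed value;
# OpenCV stores 8-bit Lab with b = 128 meaning neutral (higher = more yellow).
img = cv2.imread("canola_plot.jpg")                 # hypothetical image path
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
b_channel = lab[:, :, 2]

mask = b_channel > 150                              # yellow-vs-blue threshold
overlay = img.copy()
overlay[mask] = (0, 0, 255)                         # paint passing pixels red (BGR)

cv2.imwrite("canola_threshold_overlay.jpg", overlay)
print("fraction of pixels above threshold:", mask.mean())
```

Under diffuse (cloudy) lighting, the mask is dominated by flower pixels; under direct sunlight, backlit leaves also pass the threshold, which is the false-positive risk noted above.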

From observation to the quantification of snow processes with a time lapse camera network

Similarly, our approach uses the reflectance values of the reference board and of the surface in the digital images to calculate the albedo. A problem is that we do not know the reflectance values of the white reference surface used, nor the response of the digital camera under varying illumination conditions. To estimate the reflectance values of the white reference board in the digital images, we used a Kodak Q13 gray card, a standard calibration card with known reflectance values. We took images of the gray card and the white reference board (Fig. 5) under direct and diffuse illumination conditions. From these pictures we calculated the digital numbers (mean of the pixel RGB values) of the gray-card patches and of the white plastic board. We then plotted the digital numbers against the reflectance values of the gray-card patches and fitted a 3rd-order polynomial function. This gave reflectance relationships for the camera type used (Fig. 5) under direct and diffuse illumination conditions, from which we estimated an average function. Since dark gray-card patches are over-represented, and since we were especially interested in acquiring accurate values for the lighter colours useful for snow albedo determination, we discarded the data from nine (8, 10, 11, 13, 14, 15, B, 17, 18, 19) of the 20 patches in order to obtain an equally weighted polynomial function over the whole gray-card range. The 3rd-order polynomial function was used to calculate the reflectance value of the reference board, which was 0.66 for direct, 0.715 for diffuse
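A minimal sketch of the camera-response fit described here is shown below; the digital numbers and reflectance values listed are placeholders, since the actual Kodak Q13 patch values and measured pixel means appear in the paper's Fig. 5 and are not reproduced here.

```python
import numpy as np

# Fit a 3rd-order polynomial relating camera digital numbers (mean RGB of a
# gray-card patch) to the patch's known reflectance, then use it to estimate
# the reflectance of the white reference board. All values are placeholders.
digital_numbers = np.array([30, 60, 95, 130, 165, 200, 230, 250])          # patch means
reflectance     = np.array([0.03, 0.09, 0.18, 0.30, 0.45, 0.62, 0.78, 0.90])

coeffs = np.polyfit(digital_numbers, reflectance, deg=3)
response = np.poly1d(coeffs)

board_dn = 215.0                       # mean digital number of the white board (assumed)
print("estimated board reflectance:", response(board_dn))
```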

Shear-wave and spatial attributes in time-lapse 3-D/3-C seismic and potential-field datasets

This dissertation addresses several topics united by the broad theme of analysing shear- (S-) wave effects in time-lapse, three-dimensional and three-component (3-D/3-C) seismic data. Seismic exploration using multicomponent recordings is the subject of extensive study in both academia and industry, providing key information about the properties of subsurface rock. While conventional (single-component) seismic imaging measures only the compressional (P) waves, multicomponent seismic data analysis utilizes multiple wave modes, such as shear (S) waves, surface waves, and converted P/S modes produced upon transmission or reflection at velocity and/or density contrasts. By analyzing and transforming multicomponent seismic records, additional seismic sections and volumes based on P and S waves can be produced. To extract and fully utilize the information contained in multicomponent data, extensive data analysis is required, including calculation of statics, velocity analysis, imaging, and extraction of additional attributes such as reflection amplitude variation with angle (AVA) or offset (AVO).

Estimation of the Probit Model from Anonymized Micro Data

The demand of scientists for confidential micro data from official sources has created discussion of how to anonymize these data in such a way that they can be released to the scientific community. We report results from a German project which explores various options of anonymization for producing such "scientific-use files". The main concern in the project, however, is whether estimation of stochastic models from these perturbed data is possible and, more importantly, leads to reliable results. In this paper we concentrate on estimation of the probit model under the assumption that only anonymized data are available. In particular, we assume that the binary dependent variable has undergone post-randomization (PRAM) and that the set of explanatory variables has been perturbed by the addition of noise. We employ a maximum likelihood estimator which is consistent if only the dependent variable has been anonymized by PRAM. The errors-in-variables structure of the regressors is then handled by the simulation-extrapolation (SIMEX) estimation procedure, where we compare the performance of quadratic and nonlinear (rational) extrapolation.
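A minimal sketch of the SIMEX step for a probit regression with a noisy regressor is shown below; it uses statsmodels' probit fit and quadratic extrapolation, and the noise variance, sample size and true coefficients are invented for illustration. The paper's estimator additionally corrects for PRAM of the dependent variable and considers rational extrapolation, neither of which is reproduced here.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Simulated data: latent regressor x, observed w = x + measurement noise.
n, beta0, beta1, sigma_u2 = 5000, -0.5, 1.0, 0.3   # assumed values
x = rng.normal(size=n)
y = (beta0 + beta1 * x + rng.normal(size=n) > 0).astype(int)
w = x + rng.normal(scale=np.sqrt(sigma_u2), size=n)

def naive_probit_slope(regressor):
    """Probit MLE slope, ignoring measurement error in the regressor."""
    X = sm.add_constant(regressor)
    return sm.Probit(y, X).fit(disp=0).params[1]

# SIMEX: add extra noise with variance lambda * sigma_u2, refit, then
# extrapolate the slope back to lambda = -1 (i.e. no measurement error).
lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
slopes = []
for lam in lambdas:
    reps = [naive_probit_slope(w + rng.normal(scale=np.sqrt(lam * sigma_u2), size=n))
            for _ in range(20)]          # average over simulation replicates
    slopes.append(np.mean(reps))

quad = np.polyfit(lambdas, slopes, deg=2)          # quadratic extrapolant
print("naive slope :", slopes[0])
print("SIMEX slope :", np.polyval(quad, -1.0))     # extrapolation to lambda = -1
```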

Time lapse: A glimpse into prehistoric genomics

Mendelian disorders, accurate identification of overt and cryptic chromosome translocations, discovery of quantitative trait nucleotides (QTNs) and expression quantitative trait loci (eQTLs), and the study of long-range regulatory interactions. Such studies ultimately lead to increased efficiency in food production and improved global food security. Once such assemblies are built for numerous species, comparative genomics becomes possible in silico, and identification of chromosome rearrangements not easily detected by basic karyotyping (e.g. cryptic translocations) is achievable by molecular cytogenetics. Chromosome-level assemblies are also essential to address basic biological questions related to genome evolution, e.g. the reasons why chromosomes break and re-form (and why sometimes they don't), as well as for understanding the significance and genomic correlates of chromosomal breakpoint regions and the reasons why blocks of genes (homologous synteny blocks) are maintained together during evolution. Far more than simply a descriptive science, therefore, cytogenetics provides a backbone for the visualization of any genome, a means through which we can understand the relationship between genome and phenome more fully, and a route to comparative genomics from a whole-genome perspective. Comparative genomics, in turn, permits the establishment of the overall genome structure of less well described species (by comparison with those better described) and the mapping of gross genomic changes that led to each species' characteristic karyotype. The purpose of this review is to summarise how we studied chromosome-level assemblies of bird species and thereby provided novel insight into the karyotypes of the avian forebears, the theropod dinosaurs.

Updating Fuzzy Models for Seismic Risk Assessment

An approach of type (iv) could be said to be purely fuzzy. Such a double-fuzzy model for seismic damage prediction is proposed in [14]. The approximation of the pdf f by a


Impact of updating land surface data on micrometeorological weather simulations from the WRF model

Results demonstrate an explicit influence of the initial land boundary conditions (e.g., land cover, terrain and LAI) on micrometeorological weather forecasts, especially surface temperature and humidity. Several simulation cases were run to study the varied impact of these variables on a calm day, a rainy day and over an extended time period. Overall, improvement factors in the range of 15-30% are observed in the quality of the short-term prediction of micrometeorological elements such as 2-m surface temperature, RH, wind speed and solar radiation. Furthermore, the study has illustrated that selecting an optimum resolution and the interpolation methods applied when representing boundary conditions at model resolution are vital to improving mesoscale model performance. The advanced interpolation techniques used in this study helped preserve critical land uses at the model's resolution, information which is usually lost when upscaling datasets. The modified run performed consistently well for time scales extending from 12 h to seven days, which shows that the model ingested the modified datasets without experiencing any shock. Thus, such a methodology is feasible for forecasting micrometeorological weather at different spatial and temporal scales. The modification of land-state parameters had a greater influence on near-surface temperature, RH and solar radiation simulations. However, a significant overestimation of wind speed is still noticed in the modified run, which demands further experimental study. The model outperformed the control run and simulated near-surface weather exceptionally well in vegetated areas. Realistic

Time lapse: A glimpse into prehistoric genomics

Last year (O'Connor et al. 2018c) we applied a comparable approach to recreate the most likely ancestral karyotype of diapsids. Using a combination of bioinformatics and molecular cytogenetics, we developed a FISH (BAC) probe set that would hybridise directly across species that diverged hundreds of millions of years ago (Damas et al. 2017). The BACs used gave strong hybridization signals to turtle (Figure 1) and some Anolis carolinensis (lizard) chromosomes, and to those of two turtles, Trachemys scripta (red-eared slider) and Apalone spinifera (spiny soft-shelled turtle). Although these two turtles do not have chromosome-level assemblies, molecular cytogenetic analysis allowed us to anchor the series of events from the perspective of a bird-turtle ancestor. A combination of this molecular cytogenetic approach and bioinformatics allowed us to recreate the inter- and intrachromosomal changes that occurred from the diapsid ancestor, to the archelosaur (bird-turtle) ancestor (Benton et al. 2015), through the theropod dinosaur lineage to modern birds.

Travel time estimation from fixed point detector data

Test vehicle techniques have been used to measure travel time since the late 1920s. As the most common travel time collection method, these techniques (often referred to as floating-car techniques) measure travel time by driving an instrumented vehicle in the traffic stream as an average car, floating car or maximum car. Depending on the instruments used, the travel time measurements can be taken in three different ways. The first and traditional method is the manual method, the so-called traditional floating-car method: a driver operates the test vehicle while a passenger records the travel times at upstream and downstream stations using a clipboard and stopwatch. The second method improves on the manual method by integrating an electronic distance measuring instrument (DMI) into the test vehicle; the travel time is determined from the speed and distance information recorded by the DMI. The third test vehicle method uses a Global Positioning System (GPS) receiver in the test vehicle, connected to a portable computer, to collect the vehicle trajectory information from which the travel time can then be determined. Although these test vehicle techniques, such as those that use a DMI, are cost effective, their accuracy is limited by having few or even only one measurement per time interval, as well as by possible error from the driver's judgment of the various traffic conditions. Furthermore, these test vehicle techniques are time-consuming, labor-intensive and expensive when collecting sufficient data (Vanajakshi 2004).
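As a small illustration of the third (GPS-based) method, the sketch below derives a link travel time from a logged vehicle trajectory by interpolating the times at which the vehicle crosses the upstream and downstream station positions; the trajectory points and station chainages are invented for illustration.

```python
import numpy as np

# Hypothetical GPS trajectory: timestamps [s] and distance along the route [m].
t = np.array([  0.0,  30.0,  60.0,   90.0,  120.0,  150.0,  180.0])
s = np.array([  0.0, 400.0, 780.0, 1150.0, 1600.0, 2050.0, 2500.0])

upstream, downstream = 500.0, 2000.0        # station chainages [m], assumed

# Interpolate the crossing times of the two stations from the trajectory,
# then take their difference as the link travel time.
t_up   = np.interp(upstream,   s, t)
t_down = np.interp(downstream, s, t)
print("link travel time [s]:", t_down - t_up)
```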

Soil hydraulic material properties and layered architecture from time lapse GPR

on the TDR data (Table 3). Essentially, there are three main reasons for this. First, by evaluating the travel time of reflection (V), integrated water content is included in the inversion. This also comprises the compaction interface (i), which is not represented in the model. At the beginning of the experiment, the amplitude of this reflection is comparable to the amplitude of the reflection originating from the interface of materials A and C (V). Notice that the amplitude of reflection (i) does not vanish, but merely decreases when the material is saturated at the end of the experiment (Fig. 14). This indicates that this reflection originates from changes both in the small-scale texture of the material and in the stored water content at the beginning of the experiment. Hence, since this compaction interface is not represented in the model, the resulting θ_r of material C is increased, compensating for this representation error. Second, a deviation in the position of the groundwater table with reference to the antenna position at the surface can be partially absorbed by changing θ_r of material C. As the position of the surface is subject to change over the years, the measurements of the groundwater table are referenced to a fixed point at the end of the groundwater well, leaving the exact position of the surface relative to the groundwater table uncertain. According to Buchner et al. (2012), the accuracy of the ASSESS architecture

Automatic detection of calving events from time lapse imagery at Tunabreen, Svalbard

Despite inherent limitations, the method can still be improved. First, the camera settings and camera installation have an impact on the result and can be optimized. For the set-up in 2014, the large aperture setting (f/2.8) and the lack of a polarizing filter caused more pictures than in 2015 to fall into the HI category. It is important to consider all of the situations that the automatic settings of the camera will have to handle. Also, instead of fixing the aperture width, one could fix the shutter speed and let the lens adapt to the amount of light available, or adapt the ISO, as in Kwasnitschka et al. (2016). The position of the camera also influences the results, and our two camera locations show both benefits and drawbacks. A camera positioned at low elevation and in front of the glacier (2014) is favourable for segmentation (refinement has to be done less often), whereas a camera situated more to the side and at higher elevation (2015) is favourable for recognizing larger events, although parts of the front may be occluded. In some images, calving events are recorded at different locations, and a shorter time-lapse interval (set according to the characteristics of the glacier) could help to distinguish single events. Storing the images in a lossy file format (like .jpg) also certainly de-

Segmentation of the vertebrate hindbrain: a time lapse analysis

The outline of the neural tube was clearly visible in the brightfield images using a 10X objective, which gave a field of view (~1000 µm x 750 µm) large enough to see details of the outline of up to 6 rhombomeres. We visualized the complete neural tube segmentation process in two overlapping time-lapse sequences, starting with 6-8 ss embryos (n=12) and 9-11 ss embryos (n=12); each sequence contained 10-15 h of recorded time-lapse video imaging of hindbrain segmentation. Because there was a loss of contrast in the explant tissue after 12 h, it was difficult to visualize the precise outline of the neural tube and rhombomere boundaries for longer time periods. In each time-lapse video image, the outline of the neural tube appeared in a 2-D horizontal plane representing a coronal section of the neural tube, so that the measured shape changes represented the lateral width dynamics of the neural tube. We measured the lateral width of the neural tube as the distance between the pial walls of the neural tube, at the axial level of mid-rhombomeres and rhombomere boundaries. During the first 10-h sequence of brightfield time-lapse imaging, we followed the maximum lateral width of the neural tube at r1, r3 and r4, and b2/3 and b3/4. During the second 10-h sequence of imaging we measured the lateral width of the neural tube at all the major rhombomere levels: r1-r6, b2/3, b3/4, b4/5 and b5/6. Because the morphological features of b1/2 and b6/7 are very subtle, their widths were not measured. Measurements were made from tracings on a clear transparency placed over the video monitor. The rostrocaudal axis of the neural tube and the axial level of each mid-rhombomere and rhombomere boundary were marked on the last image in the series (t=10 h), when the shapes of the rhombomeres and the boundary regions were more discernible. Working backwards in time, we measured the neural tube lateral widths at 1-h intervals. Using the identified mid-rhombomere and rhombomere boundary locations, we also measured the rostrocaudal distances between these axial levels at t=0 and t=10 h to note whether there were any changes in the length of the hindbrain from its segmentation.

360° Time Lapse - A New Leap in Time Lapsing with Rotation

This is the second screen, enabled once the user starts a time-lapsing session, as shown in Figure 7. The screen is dynamic and provides visual feedback to the user: it contains a progress bar showing the percentage of the time lapse completed. The only user interaction available on this screen is the cancel button. There are two exit strategies from this screen: the user can manually override and select cancel, or the screen times out when the time-lapsing session ends. The session timeout is calculated from the parameters selected on the main screen.

Motion denoising with application to time-lapse photography

We have introduced motion denoising: the process of decomposing videos into long- and short-term motions, allowing motion resynthesis in a way that maintains one and removes the other. We showed how motion denoising can be cast as an inference problem within a well-defined, data-driven formulation that does not require explicit motion estimation. This allows the algorithm to operate on videos containing highly involved dynamics. We also showed that Loopy Belief Propagation is suitable machinery for performing this inference. Time-lapse videos fit particularly well within this decomposition model, and we presented a novel application whose goal is, given an input time-lapse sequence, to synthesize a new one in which the typical short-term jittery motions are removed while the underlying long-term evolution in the sequence is maintained. We presented results on a set of challenging time-lapse videos. Our technique successfully generates filtered time-lapse sequences that are visually faithful to the long-term trends and allow a better grasp of the underlying long-term temporal events in the scene.

Estimation of permeability of a sandstone reservoir by a fractal and Monte Carlo simulation approach: a case study

Abstract. The permeability of a hydrocarbon reservoir is usually estimated from core samples in the laboratory or from well test data provided by the industry. However, such data are very sparse and take a long time to acquire. Estimating permeability directly from available porosity logs could therefore be an alternative and far easier approach. In this paper, a method of permeability estimation is proposed for a sandstone reservoir which considers the fractal behavior of the pore size distribution and the tortuosity of capillary pathways to perform Monte Carlo simulations. In this method, we consider the reservoir to be a mono-dispersed medium to avoid effects of micro-porosity. The method is applied to porosity logs obtained from the Ankleshwar oil field, situated in the Cambay basin, India, to calculate the permeability distribution in a well. Computed permeability values are in good agreement with the observed permeability obtained from well test data. We also studied the variation of permeability with different parameters such as the tortuosity fractal dimension (D_t), grain size (r)
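A heavily simplified sketch in the spirit of the approach described above is given below: capillary radii are drawn from a fractal (power-law) distribution and fed into a tortuous capillary-bundle permeability estimate. The fractal exponent, radius range, tortuosity and porosity are assumed values, and the published work's exact fractal formulation (including the tortuosity fractal dimension D_t) is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(42)

# Assumed model inputs (illustrative only)
phi    = 0.22          # porosity from the log
D_f    = 2.6           # fractal dimension of the pore-size distribution
r_min  = 1e-7          # smallest capillary radius [m]
r_max  = 5e-5          # largest capillary radius [m]
tau    = 1.8           # tortuosity of the capillary paths
n_draw = 100_000       # Monte Carlo samples

# Sample capillary radii from a fractal (power-law) number distribution,
# N(>r) ~ r^(-D_f), via inverse-transform sampling.
u = rng.uniform(size=n_draw)
r = (r_min**(-D_f) - u * (r_min**(-D_f) - r_max**(-D_f))) ** (-1.0 / D_f)

# Capillary-bundle (Hagen-Poiseuille) estimate with tortuous tube length:
# k = phi * <r^4> / (8 * tau^2 * <r^2>), in m^2.
k = phi * (r**4).mean() / (8.0 * tau**2 * (r**2).mean())
print("estimated permeability:", k / 9.869e-16, "mD")   # convert m^2 to millidarcy
```

Repeating the draw for the porosity value of each log sample yields a permeability distribution along the well, which is the kind of output the paper compares against well-test permeability.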

Assessment and updating of the fortification model from 2006

and for the minerals calcium, magnesium, zinc, copper, iodine and selenium. For nutrients where no UL values are available from SCF/EFSA or NNR, the Danish authorities have applied so-called GL values. The Danish assessment is to a large extent based on the report Safe Upper Levels for Vitamins and Minerals (2003) from the UK Expert Group on Vitamins and Minerals (EVM). This report can be viewed on the website of the Food Standards Agency, UK (http://www.food.gov.uk/). In the Danish model, GLs that correspond to the conclusions of the UK report are used for riboflavin, vitamin B12, biotin, pantothenic acid and
