
Tuesday 15 April 2014

Direct Hydrocarbon Indicators

Seismic Methods & Hydrocarbon Detection

Explorationists have long dreamed of a device or technique that they could use at the earth's surface to get direct indications of deeply buried hydrocarbons. Over the years, scientists, inventors, entrepreneurs -- and not a few eccentrics -- have used tools ranging from divining rods to gas sniffers to "black boxes" in pursuit of this dream. They've analyzed soil samples, studied vegetation and taken pictures of the earth. They've walked, driven, flown and even sent up satellites. But their efforts have generally gone unrewarded. True, there are documented cases where hydrocarbons at depth have been successfully measured at the surface. But in these cases, some of the hydrocarbons leaked to the surface along fault planes and were detected by geochemistry or geobotany analyses. The ultimate tool still eludes us.
The only "reliable" tool we have for measuring the subsurface at a satisfactory density is the seismic method. So let's see if and how we can use the seismic method as a direct hydrocarbon indicator.
With ever-increasing resolution and clarity, seismic exploration technology is moving beyond the role of simply imaging the subsurface. We can now discern patterns in a processed seismic section that tell us much about a structure's nature and geometry. This is well-documented in the classic AAPG Memoir 26, Seismic Stratigraphy (Payton, 1977) and more recently, in AAPG Memoir 42, Interpretation of 3-Dimensional Seismic Data (Brown, 1991). The logical next step, of course, would be for a seismic-derived image to be presented directly -- not only in terms of geometry, but in terms of lithology, fluid content and other parameters which bear on reservoir potential.
The Encyclopedic Dictionary of Exploration Geophysics defines a direct hydrocarbon indicator as "a seismic measurement which indicates the presence of hydrocarbon accumulation" (Sheriff, 1973). These measurements may be in the form of "bright spots", "dim-outs", flat spots, polarity reversals, velocity sags, frequency changes, increases of amplitude with offset and P- versus S-wave ratios, among others. We commonly abbreviate "direct hydrocarbon indicator" as DHI or HCI.

Seismic Problems

In identifying a potential reservoir, we need to define such parameters as
  • geographic location
  • shape, thickness and depth
  • rock type
  • porosity
  • permeability
  • type of pore fluid
Seismic data can help us determine these parameters in anticipation of the drill bit, but only when we have other, more direct subsurface information (i.e., well data). Thus, we must begin to systematically build information bridges, by which seismic data will provide the answers it promises. Wellbore-to-seismic correlations and geoseismic modeling are among the primary tools for such syntheses.
Industry pundits unanimously agree that "advanced seismic technology" is essential for profitability and even for survival in the face of relatively unpredictable oil and gas prices. Unfortunately, the marketing and presentation of seismic products and services offer a bewildering variety of recently developed processes, procedures and practices, including AVO, DMO, VSP, 3-D, 2 1/2-D, turning waves, multi-component, shear waves, inversion, tomography, and so on.
Of these processes, it is clear that DMO processing and 3-D seismic data have attained the status of routine or nearly routine technology. Others are still considered research or quasi-research tools. But it is not clear how any one such piece of technology relates to any other, nor what it is intended to provide, nor where the whole ensemble is going.
In addressing this topic, we need to single out practical and currently advanced seismic technologies that can be of benefit in finding and defining hydrocarbon reservoirs. The first step is to identify and examine seismic responses that may be related to the presence of hydrocarbons. We know these as bright spots, flat spots, dim spots, time sags, velocity slow-downs, amplitude variations and so on.
At present, there are two mainstream applications of seismic methods:
  • finding new reservoirs
  • better defining known reservoirs, and working out and away from them
The latter category of methods is the more recently developed one, and has given rise to the term development geophysics, which is to be distinguished from the former (and more familiar) category of exploration geophysics. Both applications draw on the same seismic technology, although they do so in different ways.
From our approach to seismic technology, it is likely that new reservoirs will be found and defined principally because we are now able to solve key seismic technical problems. Hence, discussions of seismic developments should relate to such problems and the nature of the reservoirs which have been overlooked. At the same time, we should look more closely at what seismic methods might teach us about such reservoirs and their properties both now and in the future.

Historical Perspective

Anyone involved with seismic technology -- whether in acquisition, processing, interpretation, or merely as an investor; and whether this involvement has lasted for two years, or twenty years -- has seen many changes in seismic technology. It has certainly not been a static technology; instead, it is overwhelmingly dynamic. In fact, it will "leave you in the dust" if you don't keep up with it.
Let's look at some recent developments in seismic technology as they relate to direct hydrocarbon indicators.
1960s and 1970s: the new geophysicist
In the late 1960s, the seismic method began to feel the impact of the computer revolution. There were dramatic changes in acquisition methods; seismic data processing developed; and common depth point (CDP) technology was born (Mayne, 1962). The late sixties also saw the birth of the seismic specialist. Where previously, the geophysicist stayed out in the field and was responsible for the total product from acquisition through interpretation (we affectionately know him as a "doodlebugger"), the "new geophysicist" specialized in acquisition...or processing...or interpretation. From this time on, only the acquisition expert went out to the field; the processor stayed at the computer center, and the interpreter stayed in the office.
In 1973, Tucker and Yorston released their classic work Pitfalls of Seismic Interpretation. This pamphlet presented twenty-three examples describing a series of problems that an unwary seismic interpreter might encounter to great regret. While the seismic data quality of the illustrations left something to be desired, the messages were clear. The pamphlet presented guidelines both for understanding each of the effects and for making a correct interpretation. Of special interest was the fact that all the interpretive difficulties were placed into just three categories:
  • velocity
  • geometry
  • recording and processing
This pamphlet places the responsibility for addressing these pitfalls squarely on the interpreter, based on his or her skills, experience and intellect -- it offers few "crutches" beyond doing careful work and gaining as much experience as possible.
It is interesting that only three years later, Neidell expressed a somewhat different interpretive philosophy, which the AAPG later documented in a set of published course notes (Neidell, 1984). Based on this work, we can set forth an interpretation procedure for handling seismic data that begins with a processed seismic section. For the moment, we will suspend all questions or doubts about the effectiveness of the data processing, and interpret the section based on the following assumptions:
  • Each trace of the section represents only primary reflections from the subsurface, having locations immediately below where we have shown them to be plotted.
  • Individual reflection events can be identified, and their amplitude is diagnostic of the change in acoustic impedance across the boundary causing the reflection.
In this ideal world, we may simply state the objectives and procedures of both seismic and stratigraphic interpretation. Figure 1 (relationship between lithology, propagating wavelet and seismic response) shows a portion of a lithology log and a corresponding acoustic impedance log.
Figure 1
Each contrast in acoustic impedance is marked by a reflection event having a simple waveform. The polarity or sense of the reflection and its size indicate the nature of the contrast. Individual reflection events for the model are shown, along with their summation in the resulting seismic trace.
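A minimal sketch of this convolutional picture, in Python, may help. The layer impedances, boundary times and the 30 Hz Ricker wavelet below are illustrative choices of ours, not the values behind Figure 1; the point is only that each impedance contrast contributes one scaled wavelet, and the trace is their superposition.

    import numpy as np

    # Acoustic impedance (velocity x density) for a stack of layers; values are illustrative.
    impedance = np.array([6.0e6, 7.5e6, 6.8e6, 9.0e6, 8.2e6])

    # Normal-incidence reflection coefficient at each boundary:
    #   R = (Z2 - Z1) / (Z2 + Z1); its sign gives the polarity, its size the strength.
    reflectivity = (impedance[1:] - impedance[:-1]) / (impedance[1:] + impedance[:-1])

    # Spread the reflection coefficients onto a two-way-time axis (2 ms sampling).
    dt = 0.002
    boundary_times = [0.020, 0.036, 0.060, 0.084]          # seconds, illustrative
    spikes = np.zeros(64)
    for t_b, r in zip(boundary_times, reflectivity):
        spikes[int(round(t_b / dt))] = r

    # Simple zero-phase (Ricker) wavelet with a 30 Hz dominant frequency.
    t_w = np.arange(-0.032, 0.032 + dt, dt)
    ricker = (1.0 - 2.0 * (np.pi * 30.0 * t_w) ** 2) * np.exp(-(np.pi * 30.0 * t_w) ** 2)

    # The synthetic trace is the reflectivity series convolved with the wavelet,
    # i.e., the sum of one scaled wavelet per impedance contrast.
    trace = np.convolve(spikes, ricker, mode="same")
    print(np.round(reflectivity, 3))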
Interpretation begins, then, with the development from each trace of an acoustic impedance log or, equivalently, a reflectivity series. We correlate these results trace-to-trace to provide the structural considerations, and correlate them also with whatever geologic information is available. Using geological principles and insights appropriate to the region, we infer lithologic estimates, and from these estimates and the indicated trace-to-trace changes, geometry and depositional patterns, we interpret sequences and history.
The question of the wavelet's shape is worth considering, as Figure 2 demonstrates.
Figure 2
As we can see, the same lithologic sequence, viewed as an ideal normal-incidence synthetic seismic trace, is extraordinarily difficult to interpret on the basis of untreated waveforms.
Motivation for this approach followed from new developments in estimating seismic waveforms and converting them effectively to the simple zero-phase symmetric form shown in Figure 3.
Figure 3
The individual superimposed waveforms are also shown to provide some reasonable perspective for understanding the superposition.
As with many significant developments, the control of seismic wavelets represented only one added element in the extraction of subsurface information from seismic data. In fact, during that same period, the role of seismic modeling as a quantitative bridge to subsurface parameters, and the use of seismic patterns for discerning lithology, depositional setting and other stratigraphic components, were introduced in systematized form by Vail and the Exxon school of seismic stratigraphers (1977). In Figure 4, regional geology and borehole measurements are presented in relation to their expression as patterns which may be seen in the seismic view.
Figure 4
In the 1970s, we learned (1) that the pitfalls we might encounter could have analytic treatments via seismic data processing or special field practices, and (2) that more information might be recoverable from seismic data than previously appreciated. The use of seismic patterns for interpretive purposes and the lessons from seismic modeling testified to the second point.
As noted earlier, a respect for the information contained in the seismic response evolved. In particular, we learned the value of seismic amplitude, and that retention of accurate amplitude information is a must. Rather than arbitrarily increasing the amplitude to maximize the structural content, we learned that maintaining "true" amplitudes (also called "relative" amplitude processing, or RAP) could tell us much about certain qualities of the subsurface. And we found that when we did this, we noticed anomalously high amplitudes, or "bright spots," that we equated with the presence of oil and/or gas.

1980s - A critique

In the 1970s, we recognized the importance of bright spots and built a well-defined link between seismic interpretation and the role of data processing (and, to a lesser extent, acquisition). The 1980s saw increased efforts to perfect seismic imaging and extract as much subsurface information as possible. It became clear, for instance, that in an appropriate geologic setting, a porous, gas-filled Pleistocene or Pliocene sand on a properly processed and imaged seismic section would be readily recognizable. For example, we can identify an offshore Gulf Coast sand in Figure 5.
Figure 5
The prominent trough denoting the sand top is labeled, as is the gas-water contact, which is indicated by the essentially flat sequence of strong peaks.
For this same data, Dedman, Lindsey and Schramm (1975) interpreted most of the sands in the subsurface column between 1.0 and 1.8 seconds. Similarly, sands were identified by means of well log measurements, and plotted on a two-way travel time scale for direct comparison with the seismic data. The results are shown in Figure 6.
Figure 6
In assessing the remarkable agreement, it is important to note that the top of a sand corresponds in each case to a trough, negative amplitude or "white event", while the base is defined by a peak, positive amplitude or "black event."
Hence, the waveforms inherent in our seismic data are more amenable to interpretation once we manipulate and transform them to simpler waveforms. We can now accomplish these transformations, in principle, for all reflection seismic data, and attain correlations between seismic data and geological inputs to produce more definitive results.
Figure 7 demonstrates the improved correlations of seismic images with subsurface data, this time using an elementary model, the synthetic seismogram.
Figure 7
A seismic section over a North Sea oil field has been separated at the well location, and several repetitions of the synthetic seismic signature computed from the velocity and density logs have been inserted. A common waveform is used in the actual processed data and the synthetic traces. Such agreement clearly enables correlations to be made, allowing one to work away from the well control with a good degree of confidence.
Let's now look at a processed seismic section from the offshore Texas Miocene (Figure 8).
Figure 8
In this case, the Miocene sands are associated with peak reflections, as distinct from the previous association of sand reflections with troughs. The fact that sand reflections may be signaled by either peaks or troughs is rarely mentioned in the literature describing seismic technology (Rutherford and Williams, 1989; Neidell and Berry, 1989; Neidell and Lefler, 1992). It certainly was not mentioned at all prior to 1989. Obviously, such a fundamental matter deserves to play a role in our thinking relating to seismic data, since some 50 percent of global hydrocarbon production relates to sands and sandstones.
The prominent Gulf Coast gas sand of Figure 5 could also have been confirmed via seismic modeling as a companion to the log correlations performed by Dedman et al. (1975), which were shown previously in Figure 6. For such a study, we could apply two-dimensional seismic modeling.
In Figure 9 (synthetic seismograms developing a two-dimensional model of a partially gas-filled sand anticline -- band-pass zero-phase symmetric wavelet versus actual contractor wavelets A and B),
Figure 9
the model shows a mildly structurally closed sand unit of relatively uniform thickness. The upper 60 ft [18 m] of this 120 ft [36 m] sand is gas-saturated. Field seismic data exhibited a "bright spot" with about the amplitude relief shown in the figure. This model was originally computed with theoretically derived rock velocities and densities for the sand and shale, based on data calibrated to nearby well logs. An amplitude increase of only 25% was obtained using these values, so the model was altered to have the values shown in the figure. These were derived by assigning a 0.04 reflection coefficient to the shale-water sand interface and adjusting the gas sand velocity to provide the amplitude increase shown on the seismic data. Densities were included in the reflection calculation and generally follow Gardner's equation. For a detailed discussion of Gardner's investigations, refer to Carter and Siraki (1993).
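As a hedged illustration of the arithmetic involved, here is the commonly quoted form of Gardner's relation (density ≈ 0.23·V^0.25, with V in ft/s and density in g/cm³) applied to the reflection-coefficient calculation. The velocities below are our own illustrative picks, not the adjusted model values of Figure 9.

    def gardner_density(v_ft_s, a=0.23, b=0.25):
        """Commonly quoted form of Gardner's relation: density (g/cc) ~ a * V**b."""
        return a * v_ft_s ** b

    def reflection_coefficient(v1, rho1, v2, rho2):
        """Normal-incidence reflection coefficient from the acoustic impedances."""
        z1, z2 = v1 * rho1, v2 * rho2
        return (z2 - z1) / (z2 + z1)

    # Illustrative interval velocities (ft/s); not the calibrated model values.
    v_shale, v_water_sand, v_gas_sand = 9500.0, 10300.0, 7800.0

    rho_shale = gardner_density(v_shale)
    rho_water_sand = gardner_density(v_water_sand)
    rho_gas_sand = gardner_density(v_gas_sand)

    r_water = reflection_coefficient(v_shale, rho_shale, v_water_sand, rho_water_sand)
    r_gas = reflection_coefficient(v_shale, rho_shale, v_gas_sand, rho_gas_sand)

    # The gas-sand reflection is markedly stronger (and of opposite polarity) than
    # the water-sand reflection -- the essence of a "bright spot."
    print(round(r_water, 3), round(r_gas, 3))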
First, we computed model responses for two different but documented marine basic wavelets. These might be thought of as coming from two different contractors. It is not our position to choose between these two wavelets. It is apparent that the seismic sections look quite different, and if they were members of a grid of data on the same prospect, it would be difficult to tie them.
For conventionally processed data, it is generally unreliable to attempt picking the top and bottom of the gas sand for thickness estimates. However, it would be possible to detect the probable presence of the sand and to place it on the map. Wavelet B might leave us wondering if we have one sand or two, while wavelet A makes us wish for some higher frequencies, in the hope that more detail as to the exact stratigraphy could be seen.
Both wavelets A and B have a bandwidth approximately equal to the 8-32 Hz response shown in the figure. Thus, this model response is what each of the other two could have been converted to, with appropriate processing. No problems of tying data between the two data sets would then exist.
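A minimal sketch of how such a conversion might be carried out, assuming the embedded wavelet has been estimated: the frequency-domain shaping filter and the stand-in wavelets below are our own assumptions, not contractor wavelets A and B.

    import numpy as np

    def shape_to_target(trace, embedded_wavelet, target_wavelet, water_level=0.01):
        """Design a frequency-domain shaping filter that converts the wavelet
        embedded in a trace toward a desired target (e.g., a zero-phase band-pass),
        stabilized with a simple water level, and apply it to the trace."""
        n = len(trace)
        W = np.fft.rfft(embedded_wavelet, n)
        T = np.fft.rfft(target_wavelet, n)
        stab = water_level * np.max(np.abs(W))
        F = T * np.conj(W) / (np.abs(W) ** 2 + stab ** 2)   # stabilized T / W
        return np.fft.irfft(F * np.fft.rfft(trace, n), n)

    # Stand-in wavelets: a decaying, non-zero-phase "contractor" wavelet and a
    # zero-phase Ricker target of broadly comparable bandwidth.
    dt = 0.004
    t = np.arange(256) * dt
    embedded = np.exp(-20.0 * t) * np.sin(2.0 * np.pi * 20.0 * t)
    target = (1.0 - 2.0 * (np.pi * 18.0 * (t - 0.5)) ** 2) * np.exp(-(np.pi * 18.0 * (t - 0.5)) ** 2)

    shaped = shape_to_target(embedded, embedded, target)     # result approximates the target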
We should in fact have become quite suspicious at this point, in that even this simple application of modeling required us to force parameter values to achieve an acceptable fit. Strictly speaking, our model should also have included multiples, as should the North Sea synthetic seismogram (Figure 7). Experience has taught us that including all multiples in model calculations (of any type) degrades our fits substantially, although from time to time we can document the presence of a multiple or two.
Unfortunately, our quest for added information from seismic data forced us to face circumstances in which the available tools could not solve the problem. Figure 10 and Figure 11 clearly illustrate this point.
Figure 10
These figures were developed from the Rocky Mountain area, and exemplify the problem of distinguishing between a coal seam and a partially gas-filled sand encased in a sandy shale.
Figure 11
Simulated time sections (with and without noise, using a fairly typical bandwidth) suggest that in practical terms, it is not likely that this distinction could be reliably accomplished using normal seismic displays and techniques.
The 1980s saw the introduction of new forms of HCIs. In the early eighties, seismic inversions became a popular means of directly identifying subsurface hydrocarbons. In this method, the seismic trace is converted to a synthetic impedance well log. When we apply inversion to a series of adjacent seismic traces, we then produce a synthetic impedance section. We interpret the impedance section as if we had a series of closely spaced acoustic, or sonic well logs.
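A minimal sketch of the recursion at the heart of such an inversion, assuming the trace has already been reduced to a reflectivity series and that a starting impedance is available (the values below are illustrative):

    import numpy as np

    def reflectivity_to_impedance(reflectivity, z_start):
        """Recursively rebuild acoustic impedance from a reflectivity series:
        Z[i+1] = Z[i] * (1 + r[i]) / (1 - r[i]), starting from a known impedance."""
        z = [z_start]
        for r in reflectivity:
            z.append(z[-1] * (1.0 + r) / (1.0 - r))
        return np.array(z)

    # Illustrative reflectivity samples; in practice z_start comes from a nearby
    # sonic/density log, and the result is interpreted like a closely spaced
    # pseudo-impedance log under each trace.
    r_series = np.array([0.02, 0.11, -0.08, 0.00, 0.05])
    print(np.round(reflectivity_to_impedance(r_series, z_start=6.0e6) / 1.0e6, 2))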
The late 1980s also saw the use of amplitude data develop as an alternative. Using equations of energy partitioning, we began relating changes in amplitude as a function of distance from source to receiver (or offset) to changes in fluid type. Depending on the circumstances, we observed that seismic amplitudes either increase or decrease according to the fluid content. This type of HCI is called amplitude versus offset, or simply AVO. Using this method, the problems of discrimination shown in Figure 10 and Figure 11 could possibly be resolved.
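The text does not name a particular formulation, but a common way to express the offset dependence is the two-term Shuey approximation, R(θ) ≈ A + B sin²θ, where the intercept A is the normal-incidence reflectivity and the gradient B carries much of the fluid sensitivity. A minimal sketch, with illustrative intercept/gradient pairs:

    import numpy as np

    def shuey_two_term(intercept_a, gradient_b, angles_deg):
        """Two-term Shuey approximation of P-wave reflectivity versus incidence
        angle (reasonable at moderate angles): R(theta) ~ A + B * sin(theta)**2."""
        theta = np.radians(angles_deg)
        return intercept_a + gradient_b * np.sin(theta) ** 2

    angles = np.array([0.0, 10.0, 20.0, 30.0])

    # Illustrative pairs: a brine sand with a weak positive intercept and small
    # gradient, versus a gas sand whose negative intercept and negative gradient
    # make its amplitude grow (more negative) with offset.
    print(np.round(shuey_two_term(0.05, -0.05, angles), 3))
    print(np.round(shuey_two_term(-0.05, -0.20, angles), 3))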

View of the 1990s and beyond: the 3-D explosion

We may now consider making "ties" between seismic data and the subsurface as we know it from wellbore observations and measurements. There are many aspects to such an objective. One obvious one is the initial issue of imaging seismic data most effectively. We must also examine how we may best use such displays. Currently, we are experiencing an explosion in seismic technology in the form of 3-D data. This new technology, along with advances in computer workstations, is forcing us to learn new ways to operate, process and interpret seismic information. It is important to clearly appreciate the relationship of 3-D technology to 2-D methods. Fortunately, this is a simple matter, and involves only a straightforward extension of 2-D technology.
At the same time, we must clearly define (even on a global basis) the geological relationships between lithologies and their characteristic reflections. Also, we must carefully scrutinize the models we employ and the theoretical equations on which we rely. We have already seen major discrepancies between such models and the behavior they predict (for example, the prediction of the presence or absence of multiple reflections, and the ability to predict seismic amplitude levels). If there is to be real progress, we cannot tolerate blind spots of such basic importance.
Just as the early 1990s brought us to new high resolution in the horizontal domain in the form of 3-D seismic, we will see new revolutions in the vertical domain in the late 1990s. At that time, no longer will a 2 ms sample rate and 20 ft resolution be the norm, but we will routinely see 0.1 ms data and 1 ft vertical resolution.
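As a rough check on what such numbers imply, using the familiar quarter-wavelength rule of thumb and the Nyquist relation (the 8,000 ft/s velocity and the frequencies below are illustrative assumptions, not claims about any particular survey):

    def nyquist_hz(sample_interval_s):
        """Highest unaliased frequency for a given time sample interval."""
        return 1.0 / (2.0 * sample_interval_s)

    def quarter_wavelength_ft(velocity_ft_s, dominant_freq_hz):
        """Quarter-wavelength rule of thumb for vertical (tuning) resolution."""
        return velocity_ft_s / (4.0 * dominant_freq_hz)

    # 2 ms sampling keeps the usable band below 250 Hz; at ~8,000 ft/s, a ~100 Hz
    # dominant frequency resolves beds of roughly 20 ft, while ~1 ft resolution
    # would demand dominant frequencies near 2,000 Hz -- hence the push toward
    # 0.1 ms (5,000 Hz Nyquist) recording.
    print(nyquist_hz(0.002), nyquist_hz(0.0001))
    print(quarter_wavelength_ft(8000.0, 100.0), quarter_wavelength_ft(8000.0, 2000.0))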
