Spring 2009 Distinguished Lecture
Integrating seismic acquisition and processing
Jack Bouska, BP Corporation

(Editor's note: The Distinguished Lecture (DL) Program is an active effort to promote geophysics, stimulate general scientific and professional interest, expand technical horizons, and provide a connection to SEG activities and practices. Jack Bouska's DL tour began in January 2009 and will continue through July. For more information about Jack, his lecture, and tour dates, please click here.)

Early in each lecture, I pose a question: “Who here likes seismic?” As you might expect, I invariably receive a flurry of raised hands in response. Naturally, I like seismic too, and for a variety of reasons: I enjoy hypothesizing about the reservoir geology, and I’m equally passionate about designing the best and most cost-effective acquisition experiment to image the subsurface. I’m also fascinated by working the data processing and analyzing the images, comparing them against, and then revising, my original beliefs about the reservoir. In those two sentences, I’m really just describing a practical implementation of “the scientific method” for seismic surveying. But those brief comments also succinctly illustrate my view on how acquisition, processing, and interpretation might be integrated in a single system to produce better seismic images within realistic budgets.

The opportunity for involvement in a project through the full cycle of design, field acquisition, processing, and interpretation comes perhaps only once or twice in a geophysicist’s career. Finding individuals with skill and experience in all three areas is even rarer. Fortunately, this need not act as a barrier to better integration of our acquisition designs and subsequent processing flows, given sufficient communication across disciplines, coupled with insight into new possibilities.

The benefits we expect from integration are lower project costs and improved quality of the seismic image. Adjusting surface sampling (station spacing and fold) to match the requirements of the reflection target horizon, instead of oversampling the noise, can achieve significant cost savings during acquisition, but it also requires some method of attenuating aliased noise in processing. Prestack dip filters, such as f-k filtering and even prestack migration itself, effectively attenuate most forms of coherent noise, with the caveat that the noise (and signal) must be continuous and unaliased. The requirement of tighter spatial sampling for prestack dip filtering drives acquisition costs higher, but do we always need to pay more to improve quality?

Traditional noise-attenuation techniques, such as f-k filtering, are multichannel velocity-discrimination dip filters, which perform best when the spatial sampling is fine enough to minimize aliasing artefacts in the transform domain. In areas with slow, wideband surface-noise trains, the surface shot and receiver station intervals must be very small (meters), resulting in very expensive field acquisition. Prestack gathers, such as shot, receiver, or cross-spread, all exhibit the same level of geologic dip seen on a stack section, so 3D f-k filtering in these domains carries a risk of attenuating reflection data along with the noise. The practice of tight spatial sampling plus f-k filtering represents a very common variety of acquisition and processing integration; unfortunately, it is not well optimized in terms of either cost or quality.
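To make the sampling trade-off concrete, here is a minimal back-of-the-envelope sketch in Python of the spatial Nyquist criterion that drives station spacing (the velocities and frequencies below are illustrative assumptions, not values from the lecture):

    # Spatial Nyquist criterion: an event with apparent velocity v_app (m/s)
    # and maximum frequency f_max (Hz) is recorded unaliased only if the
    # station interval dx satisfies dx <= v_app / (2 * f_max).

    def max_station_interval(v_app, f_max):
        """Largest station interval (m) that keeps one event unaliased."""
        return v_app / (2.0 * f_max)

    # Slow, wideband ground roll forces meter-scale station intervals...
    print(max_station_interval(v_app=300.0, f_max=30.0))    # 5.0 m
    # ...while a fast, deep reflection tolerates far coarser sampling.
    print(max_station_interval(v_app=3000.0, f_max=60.0))   # 25.0 m

That order-of-magnitude gap between what the noise demands and what the target reflections need is precisely the cost lever at stake: sample for the noise and acquisition becomes very expensive; sample for the target and processing must cope with aliased noise.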
The prestack-migration operator itself is an excellent wave-equation-based dip filter that passes subsurface geologic dips while rejecting (unaliased) source-generated coherent noise, such as ground roll, refraction reverberation, and backscatter. Unfortunately, prestack migration will also pass any nongeologic coherent dip, such as f-k filter impulse responses left embedded in the data, which is another reason to avoid multichannel filter application ahead of imaging.

The best option for any prestack single-fold sort domain is a class of filtering that is dip-independent and has a minimal spatial impulse response. Standard despike amplitude discrimination and 3D FXY deconvolution are two examples of noise attenuation with compact operators that are useful for preconditioning data for prestack migration. Historically, 2D FX decon has had limited effectiveness as a prestack noise attenuator; 3D FXY decon, however, can be successfully applied to attenuate noise in a variety of prestack single-fold sort gathers, such as common-receiver, cross-spread, common-salvo, and common-offset. 3D FXY decon can also be cascaded with despike to increase performance (a minimal sketch of the underlying f-x prediction idea appears at the end of this column).

Sorting from one gather type to the next causes different aspects of the aliased-noise wavefield to appear unpredictable, while the properly sampled subsurface reflections retain continuity and coherency. This enhances amplitude and continuity discrimination of noise versus signal and facilitates suppression of the aliased noise and spurious amplitudes that cause grief for prestack migration. Compared to simple f-k filtering, the cascaded-FXY approach might require extra effort in processing, but it has the distinct advantage of being one of the few techniques that is truly effective at attenuating aliased noise. This allows deployment of larger station intervals, saving millions of dollars in acquisition costs while simultaneously improving final image quality.

So how do we integrate our 3D acquisition with data processing? High-quality subsurface imaging requires that the acquisition design provide broad and uniform ray coverage, properly sampling the subsurface targets irrespective of source-generated noise. Data processing must precondition the seismic data for optimum prestack migration, which means attenuating spurious amplitudes, aliased noise, and discontinuities in both the signal and noise wavefields. Prestack migration then has the dual purpose of imaging the subsurface structures and attenuating the remaining coherent noise.

The lecture contains a number of case histories showing how the concept of sensible target sampling can be used to obtain high-quality images at low cost. This concept started in the early 1990s with sparse 3D surveys in Canada, modestly shot at only ten-fold, and culminates with a recently acquired ultradense 3D survey from the interior of Oman at a staggering 1500-fold. Despite the dramatic range of raypath density between these two extremes, the cost/km² of the sparse Canadian surveys and the dense Oman surveys is nearly identical. So how exactly was this feat accomplished? Part of the answer lies in the implementation of a new simultaneous vibroseis source technique called “distance separated simultaneous sweeping,” or DS3 for short. For the rest of the details, well, you’ll need to plan on attending one of the lectures so you can hear about it firsthand.

See you there,
Jack Bouska
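For readers who want to experiment with the f-x prediction idea behind FXY decon, the following is a minimal, illustrative Python sketch. The function name and parameters are my own for illustration, not production code; the 2D f-x form is shown for brevity, while the 3D FXY variant runs the same per-frequency prediction over two spatial axes.

    import numpy as np

    def fx_decon(gather, dt, filt_len=5, fmin=5.0, fmax=80.0, damp=1e-3):
        """Minimal f-x prediction sketch for a 2D gather of shape (nt, nx).

        For each frequency slice, a complex least-squares prediction filter
        estimates each trace from its spatial neighbors; the predictable
        (spatially coherent) part is kept as signal.
        """
        nt, nx = gather.shape
        spec = np.fft.rfft(gather, axis=0)        # frequency slices, (nf, nx)
        freqs = np.fft.rfftfreq(nt, d=dt)
        out = spec.copy()                         # edge traces pass unchanged
        for i in np.where((freqs >= fmin) & (freqs <= fmax))[0]:
            x = spec[i]
            # Forward-prediction system: x[n] ~ sum_k a[k] * x[n-1-k]
            A = np.array([x[j:j + filt_len][::-1]
                          for j in range(nx - filt_len)])
            b = x[filt_len:]
            # Damped least squares for the complex prediction coefficients
            G = A.conj().T @ A
            G += damp * np.trace(G).real / filt_len * np.eye(filt_len)
            a = np.linalg.solve(G, A.conj().T @ b)
            out[i, filt_len:] = A @ a             # keep the coherent part
        return np.fft.irfft(out, n=nt, axis=0)

Because the prediction operator is short in space, its impulse response stays compact, which is what makes this family of filters safe ahead of prestack migration, in contrast to the broad impulse responses that f-k filters can leave embedded in the data.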