We would be happy to work with you to design and develop special lectures which best fit you and/or the needs of your organization.
Additionally, Imode Education is available to give lectures ranging from basic to advanced levels and even public lectures to offer expertise to a wide range of outlets based on need. For more information please email email@example.com
Arctic, Environmental challenges, Diagenesis, Gas hydrates, Gulf of Mexico, Niger Delta, The northwest Australian shelf, Porous rock, Permeable rock, Bayesian inference, Earth’s magnetic field, Earth’s gravitational field, Direct-firing (direct combustion), Co-firing, Gasification, Pyrolysis, Anaerobic digestion, Irradiance, Photoelectric Effect, Proxy data, Coal-bed methane, Igneous rocks, Metamorphic rocks, Carbonate rocks, A volcanic island arc, Pacific Ring of Fire, Mantle Convection, The hot-spot theory, Paleoseismicity, Troughs, Seamount chains, Polar Wander, Sea-floor Spreading, Bathymetry, Aerosols, Coral Reefs, Strike-slip fault, Thrust fault, Normal fault, Hypocenter, Epicenter, Holocene, Blackbody spectrum, Thermohaline circulation, Gulf Stream, Biodiversity, Anthropogenic, Evaporation, Sublimation, Transpiration, Evapotranspiration, Cryosphere, Troposphere, Ductile and brittle behaviors, Ray tracing, Eikonal equation, Moments, Cumulants, Negentropy, Aliased data, Stiffness tensor, Compliance tensor, Bottom-simulating reflectors (BSR), Clay, Sand, Silt, Metamaterials, …
A View of Earth (synopsis):
We generally divide the Earth into four spheres: (i) Atmosphere, (ii) Hydrosphere, (iii) Biosphere, and (iv) Lithosphere. The atmosphere is a thin gaseous envelope surrounding the solid Earth; the hydrosphere is the water, dominated by the oceans; the biosphere represents all living things on the planet; and the lithosphere is the rocky outer shell. All these spheres are interconnected, with the biosphere at the center. Solar energy is essential to the function of each of these spheres, and to the system as a whole. Note that billions of years ago, oxygen was not present in the early atmosphere, and therefore there was no life as we know it today. Photosynthesis put the oxygen there.
Water covers 71% of the Earth’s surface. Therefore you are more likely to find the minerals you may be looking for below the ocean than on land. Saltwater accounts for 97.5% of this water; fresh water accounts for only 2.5%. Yes, the amount of fresh water on Earth is small.
Our planet has a very favorable temperature range that allows water to remain in a liquid state. If Earth were a colder place, like Pluto, all of its water would be permanently frozen solid. On the other hand, if it were a very hot planet, all of its water would be in the gas state. Water vapor and solid water are relatively useless to the organisms of Earth. Two phenomena are involved in the water cycle: evaporation, from oceans and land to the atmosphere, and precipitation, from the atmosphere to land and oceans.
Yes, even deep under the terrestrial ground and the seafloor, life is very abundant, albeit mainly in the form of microorganisms which are not visible to the naked eye. Jules Verne’s famous book A Journey to the Centre of the Earth reports how Professor Lidenbrock, his nephew Axel, and their guide Hans meet a diverse realm of living organisms on their descent into the interior of our planet. When the novel was first published in 1864, the idea of life existing in the deep subsurface, remote from sunlight and air, was genuine science fiction. But towards the end of the 20th century, when scientific drilling opened a window into environments hundreds of meters deep, it became clear that Jules Verne in a sense was right.
How can scientists, more precisely geophysicists and geochemists, find out what’s happening deep inside the Earth? The temperatures are too high, the pressures are extreme, and the distances are too vast for drilling. How did geophysicists figure out that the mantle is solid, the outer core is liquid, and the inner core is solid?
Seismic waves travel at different speeds in different rocks. P-waves travel through solids, liquids, and gases, while S-waves travel only through solids. P-waves travel faster than S-waves. P-waves and S-waves are also known as body waves.
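For a quantitative feel, both wave speeds follow directly from elastic moduli and density, which also shows why S-waves cannot travel through fluids. The sketch below uses illustrative, granite-like values; the moduli and density are assumptions for the example, not quoted measurements.

```python
import math

def p_wave_speed(k, mu, rho):
    """P-wave speed from bulk modulus k, shear modulus mu, density rho (SI units)."""
    return math.sqrt((k + 4.0 * mu / 3.0) / rho)

def s_wave_speed(mu, rho):
    """S-wave speed from shear modulus mu and density rho."""
    return math.sqrt(mu / rho)

# Illustrative, granite-like values (assumptions): Pa, Pa, kg/m^3
k, mu, rho = 37e9, 44e9, 2700.0
vp = p_wave_speed(k, mu, rho)
vs = s_wave_speed(mu, rho)
print(f"Vp = {vp:.0f} m/s, Vs = {vs:.0f} m/s")

# A fluid has zero shear modulus, so Vs = 0: S-waves do not propagate in fluids.
print(s_wave_speed(0.0, 1000.0))   # 0.0
```

Because the bulk-modulus term adds to the shear term in the P-wave expression, Vp is always larger than Vs in the same rock, consistent with P-waves arriving first on seismograms.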
Drilling, including drilling below the ocean floor, also plays an important role in our understanding of the Earth’s interior. The Integrated Ocean Drilling Program (IODP) and the Kola Superdeep Borehole (KSDB) are two examples of drilling programs with the primary goal of understanding the Earth’s interior.
A scientific drilling project to drill into the Earth’s crust with a goal of reaching 15 km was carried out in the former USSR. Drilling began in 1970. In 1992, the project reached a final depth of 12 km, where the temperature was 245 degrees Celsius, very high compared with temperatures at the surface of the Earth. …
Climate Change (synopsis):
“Climate is what we expect; weather is what we get.” Mark Twain (1897). A single weather event, or even a spell of unusual weather, may be unprecedented and still well within the bounds of “normal” climate variability. Long-term weather patterns characterize the climate.
Changes in Earth’s climate can have huge effects on our lives and even on our survival. Therefore it is important to understand future climate change and prepare for the potential effects of future climates.
With the media coverage of climate change in the 2000s and 2010s, people can mistakenly believe that climate changes are all caused by human actions. Actually, until the advent of the industrial era, which began in the late 1700s, all past climate changes were natural changes. Climate change attributed to human actions is a new phenomenon.
Within the past three million years or so, climate has swung between mild states like today’s, lasting from 10,000 to 20,000 years, and periods of 100,000 years or so in which giant ice sheets, in some places several miles thick, covered the northern continents. Moreover, climate changes between cycles are often sudden, especially as the climate recovers from glacial eras. Fifty million years ago, the earth was free of ice, and giant trees grew on islands near the North Pole, where the annual mean temperature was about 15 degrees Celsius (59 degrees Fahrenheit), far warmer than today’s mean there of about -1 degree Celsius.
Climate change also has something to do with recent extreme weather events, such as (i) the 2010 Pakistani floods, (ii) Hurricane Katrina, which took place during the 2005 Atlantic hurricane season, (iii) the European heat waves in the summers of 2003 and 2010, and (iv) Hurricane Mitch in 1998 (Central America, including Honduras and Nicaragua).
What Causes Climate Change? There is a balance among (i) incoming radiation from the sun, (ii) radiation reflected by the Earth, and (iii) radiation emitted by the Earth. Any change in one of the components of this system will cause climate change. Note that solar energy is a fundamental component of energy for the Earth’s climate system.
The solid Earth is a good example of a blackbody: it can oscillate at any frequency. The sun is another good example of a blackbody. Because of the importance of these two blackbodies in the climate system, we find it useful to familiarize ourselves with the physical properties of blackbodies. The blackbody spectrum is also a cornerstone of quantum mechanics: it was the analysis of blackbody spectra that led to the discovery of that field.
Without the atmosphere, the Earth’s average temperature would be -18 degrees Celsius (that is, 255 K), which is quite low. The actual average temperature of the Earth is 14.5 degrees Celsius. With the atmosphere but without accounting for the greenhouse effect, the computed ground temperature would be 26.85 degrees Celsius (300 K); this time it is too warm. The earth’s surface receives not only the net solar radiation but also infrared radiation from the atmosphere: this is the greenhouse effect, and it is needed to explain the observed temperature. A more intuitive way to understand the greenhouse effect is to pay attention to desert weather. In the desert, there are no clouds and little water vapor in the atmosphere. During the day, the climate is quite hot; at night, temperatures plunge.
How Do We Know that a Gas Is a Greenhouse Gas? (i) Diatomic molecules (N2, O2, and others) and monatomic molecules do not interact with long-wave (infrared) radiation; they are not greenhouse gases. (ii) Greenhouse gases are all polyatomic: H2O, CO2, CH4, O3, N2O. Their vibrational modes provide the transitions that absorb infrared radiation, and the absorption tends to occur at specific wavelengths.
Basically, with appropriate values of greenhouse emissivity, we can reconstruct average temperatures that are quite close to the Earth’s actual average temperature.
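The numbers quoted above (255 K for an Earth without an atmosphere, and a surface temperature near the observed average once a greenhouse emissivity is included) can be reproduced with the Stefan-Boltzmann law. In the sketch below, the albedo of 0.3 and the emissivity of 0.78 are illustrative assumptions chosen to land near the observed values, and the one-layer atmosphere is the simplest possible greenhouse model.

```python
# Radiative-balance estimate of Earth's temperature, with and without a
# simple one-layer greenhouse atmosphere. Albedo and emissivity values
# below are illustrative assumptions, not measured constants.
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0        # solar constant, W m^-2

def effective_temperature(albedo):
    # Absorbed sunlight (1 - albedo) * S0 / 4 balances emitted sigma * T^4.
    return ((1.0 - albedo) * S0 / (4.0 * SIGMA)) ** 0.25

def surface_temperature(albedo, eps):
    # One-layer atmosphere with infrared emissivity eps:
    # T_surface = T_effective * (2 / (2 - eps)) ** 0.25
    return effective_temperature(albedo) * (2.0 / (2.0 - eps)) ** 0.25

print(effective_temperature(0.3))        # about 255 K, i.e. roughly -18 C
print(surface_temperature(0.3, 0.78))    # close to the observed ~288 K
```

Turning the emissivity knob in this toy model is exactly the "appropriate values of greenhouse emissivity" idea: a larger eps traps more infrared and raises the surface temperature.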
The problem with accurately assessing climate change is that historical records of most meteorological variables go back, at best, only 160 years or so (or to about 1850). The length of this record is just too small to describe the range of natural climate variability over thousand- and million-year periods. Yet to understand fully the behavior of the atmosphere and to anticipate future climate change, we must somehow discover how climate has changed over broad expanses of time.
What causes climate change? Humans started emitting greenhouse gases only after the advent of the industrial era, which began in the late 1700s. Given that there were fairly large temperature fluctuations well before the 1700s, what are the reasons for these climate changes?
Earthquakes and Volcanoes (synopsis):
Earth is the place where we live. It provides food through the farming of its soils. It provides energy resources (e.g., oil, gas, and coal) and minerals (e.g., gold, diamonds, uranium, and thorium). Yes, Earth also kills, even more frequently these days; earthquakes and volcanic eruptions are the major sources.
Here are some thoughts that we may have as we move around the world for studies, for business, and even for leisure. (i) I am going to work in Indonesia. What is the risk that I will be killed in a volcanic eruption? (ii) I would like to attend graduate school in California. What is the chance of being killed in an earthquake? (iii) I would like to study abroad, and my options are Greece and Italy. Greece has a lot of earthquakes, so is it wise to select Greece? On the other hand, my campus in Italy is very close to the very active Mt. Etna. Our objective in these lectures is to show that significant progress has been made on such questions, and that there is still a significant way to go on a number of issues.
An earthquake occurs when the stress built up along a fault becomes stronger than the friction holding the rocks together. Then the rocks on either side of the fault suddenly break free and slide past one another, releasing the pent-up stress; the rupture sometimes propagates faster than the shear-wave speed (supershear). Energy from this slip radiates outward in all directions, including towards the surface, where it is felt as an earthquake.
An earthquake’s size, or magnitude, depends on how large its parent fault is and how much it has slipped. Because these faults extend from the surface down to several miles deep, geophysicists cannot simply visit the source to calculate these numbers. Instead, they rely again on sensors located around the Earth’s surface, which measure seismic waves, or vibrations, from an earthquake. From recordings of earthquake-generated waves, we want to get as much information as possible about the earthquake source, including (i) the hypocenter (latitude, longitude, depth), (ii) the origin time (start time of the earthquake), (iii) the magnitude (size of the earthquake), and (iv) the focal mechanism (faulting type and size).
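The statement that magnitude depends on fault size and slip can be made concrete with the standard moment-magnitude relation Mw = (2/3)(log10 M0 - 9.1), where the seismic moment M0 = rigidity x fault area x slip (in N·m). In the sketch below, the crustal rigidity of 30 GPa and the fault dimensions are assumed, illustrative values, not data from any particular earthquake.

```python
import math

def moment_magnitude(fault_area_m2, slip_m, rigidity_pa=30e9):
    """Moment magnitude Mw from seismic moment M0 = rigidity * area * slip (N*m).
    The default rigidity of 30 GPa is a typical crustal value (an assumption)."""
    m0 = rigidity_pa * fault_area_m2 * slip_m
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

# A 100 km x 20 km fault patch slipping 2 m (illustrative numbers):
mw = moment_magnitude(100e3 * 20e3, 2.0)
print(f"Mw = {mw:.1f}")   # Mw = 7.3
```

Because the moment enters through a logarithm, doubling either the fault area or the slip raises the magnitude by only about 0.2 units, which is why great earthquakes require enormous fault areas.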
A volcano is any opening in Earth’s crust through which magma has reached Earth’s surface. Vents are openings at the surface of Earth through which volcanic material passes. Most volcanoes have a crater (a roughly circular depression usually located near the top of the volcano). Mountains and hills without vents are not volcanoes.
The 1815 eruption of Mount Tambora, in Indonesia, was one of the most powerful eruptions in recorded history. The eruption resulted in a brief period of significant climate change that led to various cases of extreme weather. Several climate forcings coincided and interacted in a way that has not been observed since, despite other large eruptions that have occurred since the early Stone Age. Although the link between the post-eruption climate changes and the Tambora event has been established by various scientists, the understanding of the processes involved is incomplete.
A huge cloud of volcanic ash and gas rises above Mount Pinatubo, Philippines, on June 12, 1991. Three days later, the volcano exploded in the second-largest volcanic eruption on Earth in the 20th century. Before the volcanic activity of 1991, its eruptive history was unknown to most people; the volcano was covered with dense forest which supported a population of several thousand indigenous people. The effects of the eruption were felt worldwide. It ejected millions of tons of SO2 and other particles into the atmosphere, more than any eruption since Krakatoa in 1883. Global temperatures dropped by about 0.5°C (0.9°F) in the years 1991-93, and ozone depletion temporarily increased substantially.
Earthquake epicenters are not randomly distributed on Earth; they are much more common near trenches. Similarly, volcanic eruptions are not randomly distributed either; just like earthquakes, they are much more common near trenches. There are also significant correlations between earthquake occurrences and volcanic eruptions. The whole of Central America is under the twin threat of earthquakes and volcanic eruptions. An earthquake in Guatemala in 1976 caused 22,000 deaths, an earthquake in Nicaragua in 1972 led to 5,000 deaths, and an earthquake in El Salvador in 1986 caused 10,000 deaths. Volcanic eruptions occurred at Irazu in 1964 and Arenal in 1970, both in Costa Rica.
There are also some exceptions; that is, cases where there are no correlations between earthquake occurrences and volcanic eruptions. On July 26, 1963, a 6.1 magnitude earthquake occurred in Skopje (skop-yay) (in the present-day Republic of Macedonia, then part of Yugoslavia), killing over 1,070 people. About 80 percent of the city was destroyed. Yet there are no active volcanoes in the region.
Before the 1950s, it was assumed that the geology below the oceans was just like the geology of the continents; it was believed that oceans were just continents submerged below water. This is totally wrong! Ocean topography is very different from continental topography: the largest mountain range on Earth (in areal extent) lies under the ocean, roughly midway between the continents, and very deep oceanic trenches (~10 km below sea level) are always associated with volcanic arcs (continental arcs or oceanic island arcs), oceanic islands, seamounts, and atolls.
Remember the expression “that is only a theory.” Usually it means the answer or observation is of limited use or limited interest. That is not the case in science. In science, a theory is a well-tested and widely accepted view that scientists agree best explains the observations. A theory becomes a paradigm when it can explain a large number of interrelated aspects. Plate tectonic theory is a paradigm. The paper by three seismologists at Lamont (Columbia University), Bryan Isacks, Jack Oliver, and Lynn Sykes, published in the Journal of Geophysical Research in 1968, was a significant turning point in the acceptance of plate tectonic theory by the scientific community. Basically, they organized their seismic observations into coherent tables which incontrovertibly showed the partition of the globe into moving tectonic plates. Note that the tectonic map does not coincide with the geographic map.
The creation, expansion, and subduction of plates described here take place on a geological time scale: it took 100 million years to create the sea floor of the Atlantic Ocean. The speed of creation, expansion, and subduction of plates is measured in centimeters per year. As we can see, this is not a speed that directly endangers humans. Note that subduction into the mantle concerns only oceanic plates; the continents do not subduct into the mantle.
The plate boundary activities that we have just described do not account for seamount chains such as those of Hawaii, the Azores, the Canaries, Reunion, and Tahiti. None of these locations are at the plate boundaries. Tectonic plate theory uses a complementary theory, the hot-spot theory, to explain seamount chains.
There is a large network of fractures and faults of various sizes which cross the earth’s crust. This network is central to the occurrence of both natural and induced earthquakes. Under the right conditions, any of these faults, or a connected network of fractures, can lead to earthquakes through slippage. Can we map or image this network? What is the connectivity of the elements of this network in terms of tectonic stresses? The answers to these questions will very likely bring us closer to the short-term prediction of earthquakes.
Energy Resources (synopsis):
The Scientific and Technological Challenge of the Twenty-first Century
Energy may be the most important scientific and technological challenge of the twenty-first century. Energy is essential to modern life. Without it, many billions of people would be left cold and hungry, and long-distance communications would be extremely difficult. In most places, economic growth and energy demand are closely correlated. The availability and use of energy, especially electricity, are sometimes considered a direct measure of the economic well-being of a society.
As we discussed in previous classes, the Earth can be divided into four spheres: (i) Atmosphere, (ii) Hydrosphere, (iii) Biosphere, and (iv) Lithosphere. The atmosphere is associated with wind and solar energy. The hydrosphere is associated with hydroelectricity. The biosphere is associated with biomass. The lithosphere is associated with oil and gas, including gas hydrate (natural gas in solid form), coal, uranium (nuclear), geothermal energy, etc.
Proven reserves of uranium have peaked and are going down rapidly. Current nuclear plants consume around 1.75 % of estimated high-grade uranium reserves per year. At this rate, the present resources would last 57 years.
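The quoted lifetime follows from simple arithmetic, assuming a constant consumption rate with no growth in demand:

```python
# Quick check of the lifetime figure in the text: consuming 1.75% of the
# reserves per year exhausts them in about 1 / 0.0175 years, assuming the
# consumption rate stays constant (no growth in demand).
annual_fraction = 0.0175          # fraction of reserves consumed per year
years = 1.0 / annual_fraction
print(f"reserves last about {years:.0f} years")   # about 57 years
```

If consumption grows (for example, with new nuclear plants coming online), the actual lifetime would be shorter than this constant-rate estimate.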
Biomass energy includes charcoal, firewood, agricultural residue, temperate crop waste, tropical crop waste, animal waste, municipal solid waste, and commercial and industrial waste.
We have enough energy capacity for decades as far as oil, coal, and natural gas are concerned. However, the environmental issues associated with the conversion of these primary resources to secondary resources are major drawbacks. Nuclear energy also offers enough capacity for decades, but the potentially catastrophic effects on the biosphere in the case of an accident are major drawbacks. Renewables have very limited environmental impact, but their capacities are just too small with respect to world electricity demand and extremely small with respect to world demand for transportation fuels. Among renewables, only biomass can provide fuels for transportation.
“I believe that water will one day be employed as fuel, that hydrogen and oxygen which constitute it, used singly or together, will furnish an inexhaustible source of heat and light, of an intensity of which coal is not capable. I believe then that when the deposits of coal are exhausted, we shall heat and warm ourselves with water. Water will be the coal of the future.”
Jules Verne (1870).
Notice how small the margins between production and consumption are, yet these margins have large effects on the oil price. Besides the US (thanks to shale production), petroleum production in the other countries has stagnated or is in decline.
The Middle East has 75% of the world’s remaining conventional oil. The Ghawar field (Saudi Arabia) had 100 billion barrels of reserves; only one field of this size has ever been discovered. Ghawar was discovered in 1948 and reached its peak production in 1980 at about 6 million barrels per day. Today Ghawar’s production is below 5 million barrels per day, despite the large number of wells drilled (about 3,400) and the seven million barrels of water pumped into the reservoir daily to improve recovery. Note that our definitions of supergiant and giant oilfields are based on recoverable reserves, not on estimated reserves.
Super-Giant: Cantarell (Mexico). The second-largest producing field in the world is the Cantarell complex in Mexico, which lies 85 km from Ciudad del Carmen. The field was discovered in 1976 and began production in 1979. Cantarell’s daily production still exceeded two million barrels per day as late as 2004; in 2009, its oil output fell to 700,000 barrels per day. All the major conventional oilfields were discovered long ago, and many have passed peak production and are in decline. Spare capacity is about 0.5 million barrels per day on average. The discovery of giant oil fields (i.e., > 0.5 billion barrels) has decreased to near zero. Small discoveries are occurring, but not at a sufficient rate to offset the production decline of the giant fields. Global daily demand is expected to increase by at least 1 million barrels each year on a 10-year average.
Sedimentary rocks are formed from the breakdown of pre-existing rocks: igneous, metamorphic, and sedimentary. They are formed both by the direct deposition of fragments of pre-existing rock and by the precipitation of their solutes. (We will come back to the definitions of deposition and precipitation.) Sedimentary rocks tend to be deposited with voids between sediment particles; thus most petroleum reservoirs are found in the coarser sediments. Fine-grained sediments serve as impermeable seals to petroleum migration and also as petroleum source beds. (So in the petroleum industry, you will hear more about sedimentary rocks than about any other rocks.) Sediments are generally classified by grain size (clay, claystone; silt, siltstone; sand, sandstone; gravel, conglomerate). They are also classified by their chemistry: shale, carbonate, sandstone, and evaporites. Shale, sandstone, and carbonates make up about 99 percent of all sedimentary rocks, with shale the most abundant and carbonate the least abundant. Yet the amount of oil in carbonate petroleum reservoirs is almost equivalent to the amount of oil in sandstone petroleum reservoirs. (Keep this figure in mind; we will come back to it. It tells us about unconventional oil and gas related to shale.)
The rocks in which the generation of oil and gas takes place are known in petroleum geology as source rocks. Source rocks are generally sedimentary rocks, typically shale or limestone, that are rich in organic matter. After we have defined all the components of a large oil accumulation, the notion of source rocks will be clear. The oil and gas formed deep in the subsurface tend to rise toward the earth’s surface because they are less dense than water and most surrounding rocks. Their upward movement is known as migration.
For a commercial petroleum prospect, several requirements must be met. There must be (1) a petroleum source rock, (2) a reservoir rock, and (3) a seal rock. Petroleum migrates from the source rock to the porous and permeable reservoir. The petroleum is trapped in the reservoir by the seal rock. Seismic data acquisition provides data that we can use to image the subsurface, including petroleum traps and seal rocks.
Through billions of years of biogenic and thermogenic processes, the earth has naturally produced large accumulations of oil, such as Ghawar, Samotlor, Prudhoe Bay, and Cantarell. Through exploration, all the supergiant and even the giant fields were discovered in the 1940s, 1950s, 1960s, and 1970s. No new giant fields will be found for a while, irrespective of technological advances in oil exploration; we would have to wait a couple of million years for new giant fields to form. We are not finding new oilfields fast enough. We are not improving the recovery factor in existing fields fast enough. Too many fields are old and declining. Society’s demand for oil has not stopped increasing. In only 100 years, mankind has used more than half of all the oil on the planet, oil that took billions of years to produce and that is the result of climatic conditions that existed at only one time in the earth’s 4.5-billion-year history. Oil is a nonrenewable resource.
At the end of the day, we have to drill holes in the ground in order to determine definitively that there is a significant amount of oil at the desired location, and to produce the discovered oil. Aramco redrilled the Ghawar oilfield with horizontal wells to increase production. Directional drilling allows us to drill into a reservoir where vertical access is difficult or impossible – for instance, an oilfield under a town. Actually, accusations of directional drilling contributed to the first Gulf War in 1991: Iraq accused Kuwait of using directional drilling techniques to extract oil from Iraqi oil reservoirs near the Kuwaiti border, and Iraq subsequently invaded Kuwait. After the war, the border between Iraq and Kuwait was redrawn, with the reservoirs awarded to Kuwait.
The reservoirs in ultra-deep waters have low permeability and high viscosity and are located below salt bodies. In other words, ultra-deep-water reservoirs are very different from reservoirs currently in production. We have to drill through the salts to 5,000 m or more below the sea floor. The ultra-deep-water drilling rigs’ day rates today well exceed $2 million US, a significant cost that considerably reduces the number of industry players in ultra-deep waters to a handful. Furthermore, environmental issues (weather patterns, hurricane severity, near-sea-bottom currents, etc.) and safety challenges are equally important.
Lessons learned in drilling below 5,000 m and in dealing with the environmental issues of ultradeep waters will be useful in the Arctic. Note that the USGS estimates recoverable reserves for the entire offshore Arctic at 90 billion barrels of oil. In the portion of the Arctic OCS that the U.S. controls, the Chukchi and Beaufort seas off the North Slope of Alaska, the estimate is 9 billion barrels of recoverable oil. Other challenges of Arctic exploration and production include moving sea ice, ultraremoteness, oil-spill risks on broken ice, etc. It is safe to assume that many of the wells drilled in the Arctic will be as deep as those in ultradeep waters, surpassing 5,000 m in total depth.
Elastic Wave Propagation (synopsis):
There is nothing more important in the education of geophysicists than developing their understanding of, and intuition about, how seismic waves propagate in the ground. Moreover, wave propagation is a topic of broad appeal in science, with broad applications, including classical mechanics, aerodynamics, meteorology, earthquake seismology, medical imaging, defense, and more.
We start this series of lectures by showing animations of wave propagation. Our objective in showing these animations is to build your intuition for the physical phenomena that allow us to understand the subsurface, and to find hydrocarbons trapped in the subsurface, before we immerse ourselves in the details of the physics and mathematics of these phenomena. These phenomena are:
(i) P- and S-waves, and how we generate these waves. (ii) Reflection/transmission/refraction, which, along with diffraction, capture most information about the subsurface. (iii) The free surface, which is associated with surface waves and multiple reverberations in the water column; these effects complicate the analysis of wave-propagation responses. (iv) Anisotropy, which can provide additional characterization of rock formations.
So we will help you understand the notions of P- and S-waves, of reflection, transmission, and refraction, of the free surface, of diffraction, and of anisotropy. If you understand these five concepts and learn to use them in conjunction, then you will be able to explain every complex phenomenon in your seismic data. Again, we have five phenomena, and the complex cases are just mixtures of these five concepts.
We do not have access to snapshots of wave propagation in the earth; we are limited to data from sensors on, or just below, the surface of the earth, or in boreholes. So we have to make sure that sensors effectively measure the desired physical quantities and that they are adequately distributed. Another objective of these lectures is to describe the current and emerging distributions of sources and receivers (that is, seismic-acquisition geometries) in seismic experiments.
Here comes the continuous-medium assumption. We will disregard the atomic (microscopic) scale of rock formations and envision them without gaps or empty spaces. Furthermore, we will assume that the mathematical functions (forces, stress, displacement, and strain) which enter into wave-propagation theory are continuous, as are their derivatives where they enter. There is one exception to the continuous-medium assumption: the physical properties of rock formations. These properties can contain a finite number of surfaces separating regions of continuity; that is, rock formations can consist of piecewise-continuous regions, separated by interfaces, where the medium parameters are discontinuous. The assumption of a continuous medium permits us to define stress, displacement, and strain at the particle (macroscopic) scale instead of at the microscopic (atomic) scale. It permits us to use the laws of continuum mechanics to study seismic wave propagation and seismic data. In other words, seismology is based on the laws of continuum mechanics; it does not use the laws of quantum mechanics.
More and more, we have to work with measurements at a scale much smaller than the seismic scale. Therefore a significant aspect of the microscopic scale has to be taken into account in macro models while maintaining the use of laws of continuum mechanics. For this reason, we will consider anisotropic and even anelastic models of the subsurface.
Here are the questions we are addressing: (i) How do we describe the forces which restore a particle to its resting state? (ii) How are particle displacement and deformation related to the medium properties? (iii) How do we describe the forces which cause wave propagation in the first place? (iv) What happens when waves encounter obstructions such as interfaces? The answer to this last question is the foundation of seismic imaging.
The information contained in seismic data consists of the arrival times and amplitudes of reflected waves. We will conclude this series of lectures on elastic wave propagation by describing equations of amplitude variation with angle and offset, and equations for the traveltimes of waves arriving at given sensor locations. The amplitudes of reflected waves tell us about the contrasts of physical properties which cause the reflections. The Zoeppritz equations for a horizontal interface are one basic way of developing insight into this problem. Note that the traveltime equations are presented here because they provide insight into the relationship between recorded data and the locations in the subsurface which cause reflections, refractions, and diffractions. Be prepared to see more mathematics in this lecture than in the previous ones; however, the mathematics is basic and well illustrated.
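As a small taste of such traveltime equations, the simplest case is a flat reflector beneath a constant-velocity layer, for which the two-way traveltime is hyperbolic in source-receiver offset: t(x) = sqrt(t0^2 + (x/v)^2), with t0 = 2h/v the zero-offset time. The depth and velocity below are assumed, illustrative values, not data from a real survey.

```python
import math

def reflection_traveltime(offset_m, depth_m, velocity_ms):
    """Two-way reflection traveltime for a flat reflector beneath a
    constant-velocity layer: t(x) = sqrt(t0^2 + (x/v)^2), t0 = 2h/v."""
    t0 = 2.0 * depth_m / velocity_ms
    return math.sqrt(t0 ** 2 + (offset_m / velocity_ms) ** 2)

# Illustrative values (assumptions): reflector at 1 km depth, v = 2000 m/s.
for x in (0.0, 1000.0, 2000.0):
    t = reflection_traveltime(x, 1000.0, 2000.0)
    print(f"offset {x:5.0f} m -> t = {t:.3f} s")
```

The hyperbolic increase of traveltime with offset (normal moveout) is precisely what lets us estimate velocity and reflector depth from recordings at multiple offsets.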
Acquisition Geometries and Data (synopsis):
The concepts of homogeneity, heterogeneity, acousticity, and elasticity control the types of waves that we can generate and the physical quantities that we can record. Therefore their implications for seismic acquisition are profound. For instance, the division of seismic experiments into land and marine is primarily related to the fact that (1) in marine cases, the acquisition is conducted in water, which can be treated as acoustic and homogeneous, whereas (2) in land cases, the acquisition is conducted in heterogeneous elastic media.
Actually, the differences between the various acquisition scenarios presented here come down to the difference between generating and recording waves in a homogeneous fluid and in a heterogeneous elastic medium (solid). The fluid is generally considered homogeneous, with a relatively flat air-water interface. It supports only P-waves, and both pressure variations and particle velocity can be recorded. The solid is generally considered a heterogeneous elastic medium with a nonflat air-solid interface at the earth’s surface or a nonflat water-solid interface at the sea floor. It supports P- and S-waves, and the three components of particle velocity can be recorded. On the sea floor, pressure can be recorded in addition to the particle velocity.
The airgun is the most commonly used mechanical source device in marine surveys. It includes a chamber of compressed air and releases a high-pressure bubble of air into the surrounding water as a source of energy to generate P-waves. Because the pressure inside the bubble greatly exceeds the hydrostatic (external) pressure, the oscillating bubble causes variations in the water pressure as a function of time. These variations in the water pressure represent the airgun signature.
The events in seismics can be grouped into six categories: direct waves, primaries, source ghosts, receiver ghosts, free-surface multiples, and internal multiples. Let us elaborate on each of these categories. Note that most present seismic imaging schemes assume that data contain primaries only; in other words, they require that direct, ghost, and multiple events be removed from the data before imaging. Due to interference between primaries and the other events contained in the seismic data, the problem of removing multiples is one of the most challenging steps in seismic data processing, as we will see in 300-level lectures.
Source ghosts, say, of primaries, are almost indistinguishable from the primaries themselves when the sources are very close to the sea surface as they are in most marine experiments today. In conventional data processing, source ghosts are generally treated as a component of the source signature because they are indistinguishable from events associated with them.
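The effect of a shallow source (or receiver) ghost can be quantified with a simple sketch: at vertical incidence, the ghost arrives a time tau = 2d/v after the direct wave, and the interference carves notches in the spectrum at multiples of 1/tau. The depth value below is a hypothetical illustration.

```python
# Minimal sketch: vertical-incidence ghost delay and spectral notch
# frequencies for a source or receiver at depth d below the sea surface.
# The depth is a hypothetical illustration value.

WATER_VELOCITY = 1500.0  # m/s, typical P-wave speed in seawater

def ghost_delay(depth_m, v=WATER_VELOCITY):
    """Extra two-way travel time of the ghost: tau = 2d / v."""
    return 2.0 * depth_m / v

def notch_frequencies(depth_m, n_notches=3, v=WATER_VELOCITY):
    """Notch frequencies f_n = n / tau = n * v / (2d); n = 0 is the DC notch."""
    tau = ghost_delay(depth_m, v)
    return [n / tau for n in range(n_notches)]

depth = 6.0  # m, hypothetical tow depth
print(f"Ghost delay: {ghost_delay(depth) * 1000:.2f} ms")
print(f"Notch frequencies (Hz): {notch_frequencies(depth)}")
```

This is why tow depth matters: the shallower the source, the shorter the ghost delay and the higher the first non-zero notch frequency, which is one of the levers used in broadband acquisition design.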
Although seismic data are naturally recorded in the form of shot gathers, they are sometimes reorganized into other domains in which processing may be physically more intuitive or in which some of the important features in seismic data reveal themselves more clearly than in shot gathers. Common midpoint gathers and common offset gathers constitute examples of such domains. The distance between the source and a receiver for a given shot gather is called the offset.
A near-offset is the distance between the source and the nearest receiver to the source for a given shot gather, whereas a far-offset is the distance between the source and the farthest receiver to the source for a given shot gather.
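The offset definitions above can be sketched in a few lines of code. The 2D line geometry below (source position, receiver spacing, channel count) is a hypothetical illustration, not a geometry from this lecture.

```python
# Minimal sketch: source-receiver offsets for one shot gather on a 2D
# line, and the near and far offsets. All coordinates are hypothetical
# illustration values (positions in meters).

source_x = 0.0
# A hypothetical 240-channel streamer: first group 100 m behind the
# source, 25 m group interval.
receiver_xs = [100.0 + 25.0 * i for i in range(240)]

offsets = [abs(rx - source_x) for rx in receiver_xs]

near_offset = min(offsets)  # source to the nearest receiver
far_offset = max(offsets)   # source to the farthest receiver
print(f"Near offset: {near_offset} m, far offset: {far_offset} m")
```

The midpoint of each source-receiver pair, (source_x + rx) / 2, is what common midpoint gathers are sorted on.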
Noise in seismic data can be defined as a signal that our mathematical model cannot account for. In marine acquisition, noise can be created by side effects related to the seismic source detonation or can emanate from sources other than the seismic survey, for example, electrical power lines, ship props, drilling, other seismic boats, and wind/rough seas. Rough seas are the most important source of noise in marine acquisition. This noise is called swell noise. It corresponds to ocean swells during rough weather. The level of swell noise is actually the principal factor determining the acceptability of marine seismic data in rough weather.
In seismology, broadband is a characteristic of data with a wider band of frequencies compared to data associated with conventional seismic exploration and production. In marine towed-streamer acquisition, the data have a usable bandwidth of typically around 8 Hz to 80 Hz, whereas broadband seismic systems can produce data with usable frequencies from as low as 2 Hz to as high as 250 Hz. The additional low and high frequencies contained in broadband seismics allow us to produce images of the subsurface which readily reveal depositional models, something that is not generally possible with conventional seismics. Two of the major factors limiting the data spectra to between 8 and 80 Hz in conventional seismic systems are (1) the ghost effect and (2) swell noise. The other major factor is that propagating waves attenuate with time due to various energy-loss mechanisms that we will discuss in future lectures. The broadband approach here consists of designing the acquisition systems so that we can record data without ghosts and swell noise, or in such a way that the removal of swell noise and ghosts becomes feasible at, preferably, an early stage of the seismic-data-processing flow.
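One way to see why the extra high frequencies matter is the common quarter-wavelength rule of thumb, which puts the vertical resolution limit near lambda/4 = v / (4 f). The sketch below uses hypothetical velocity and dominant-frequency values for illustration.

```python
# Minimal sketch: quarter-wavelength estimate of vertical resolution,
# v / (4 f). Velocity and frequency values are hypothetical
# illustrations, chosen inside the bands quoted in the text.

def vertical_resolution(velocity_m_s, dominant_freq_hz):
    """Quarter-wavelength resolution estimate: v / (4 f)."""
    return velocity_m_s / (4.0 * dominant_freq_hz)

v = 2500.0  # m/s, hypothetical sediment P-wave velocity

conventional = vertical_resolution(v, 40.0)  # within the ~8-80 Hz band
broadband = vertical_resolution(v, 120.0)    # higher usable frequencies
print(f"Conventional (~40 Hz dominant): {conventional:.1f} m")
print(f"Broadband (~120 Hz dominant): {broadband:.1f} m")
```

Tripling the usable dominant frequency shrinks the thinnest resolvable layer by the same factor, which is why broadband data can reveal depositional detail that conventional data cannot.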
The towed-streamer experiment records P-waves, but no S-waves are directly recorded, although the wavepath below the sea floor may include some S-wave paths. The S-waves are not directly recorded because the receivers are in seawater, and water, like all nonviscous fluids, supports only P-waves, not S-waves. In a marine ocean-bottom seismic (or OBS) experiment, which is also known simply as a marine 4C experiment, the receivers are located at the sea floor, and we directly record S-wave arrivals along with P-wave arrivals. Every receiver station is a four-component (4C) sensing system: three components of the particle-velocity field are recorded with a three-component geophone, and the pressure field is recorded with a hydrophone.
In marine seismics, the layer of water overlying the geology controls the wave types that we can generate and the physical quantities that we can record. This water layer is also responsible for most coherent noise in marine seismic data; namely, swell noise, multiples, and ghosts. On land, the low-velocity layer (LVL) plays a similar role in different ways. The weathering of surface rocks and the laying down of soft sediments over the years produce a layer of semi-consolidated surface rocks which overlies the sedimentary section to be explored. This layer of semi-consolidated surface rocks is known as the weathering layer or low-velocity layer (LVL). The term LVL is used because of the low velocities of propagation of P-waves and S-waves through this layer. Energy trapped in the LVL is responsible for most of the challenges associated with land seismic acquisition and processing. Land seismic acquisition is designed to reduce the energy trapped in the low-velocity layer as much as possible.
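One concrete consequence of the LVL, touched on here only as a sketch, is the static time shift it imposes on land data: a simple "replacement" static correction swaps the slow weathering layer for material at a faster replacement velocity. The layer thickness and velocities below are hypothetical illustration values.

```python
# Minimal sketch: a replacement static correction for the low-velocity
# layer (LVL). The one-way time shift from replacing a layer of
# thickness h and velocity v_lvl by material at velocity v_repl is
# dt = h * (1/v_lvl - 1/v_repl). All values are hypothetical.

def replacement_static(h_m, v_lvl, v_repl):
    """One-way static time shift (seconds) from replacing the LVL."""
    return h_m * (1.0 / v_lvl - 1.0 / v_repl)

dt = replacement_static(h_m=20.0, v_lvl=600.0, v_repl=2000.0)
print(f"Static shift: {dt * 1000:.1f} ms")
```

Even a thin weathering layer produces shifts of tens of milliseconds because its velocity is so low, which is why LVL statics dominate land processing in a way that has no direct marine counterpart.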
A transition zone (also called a mixed-terrain zone) is a region where environments change rapidly, from land to the near-onshore coast and vice versa. Because ships are limited by the water depth in which they can safely be used to conduct operations, and because land operations must terminate when the source approaches the water’s edge, transition zone recording techniques must be employed if a continuous seismic profile is required over the land and then into the sea. As we can expect, different coastlines require different equipment: One has to be imaginative and work on a case-by-case basis; there are no standard acquisition geometries in transition zones yet.
Borehole seismic surveys, also known as vertical seismic profiles (or VSPs), are acquired with the source on the surface and receivers at known depths in the borehole. The key difference between borehole seismics and surface seismics like towed-streamer, OBS, land surface, or even the vertical cable, which we will discuss later, is that surface seismics cover a large area, on the order of a couple of hundred square kilometers, whereas borehole or well seismics cover just the vicinity of the borehole, but with higher resolution. High resolution here means that we can see more geological detail from borehole seismics than from surface seismics. In other words, we can easily identify structures on the order of five meters or smaller in borehole seismics, but rarely in surface seismics. One of the common functions of borehole seismics is to help find the precise location of the well in the 3D image of the subsurface derived from surface seismic data.
Electromagnetic Waves, Electrostatics, and Magnetostatics (synopsis):
Electromagnetic (EM) radiation is all around us and present throughout the universe, yet it is hard to get a handle on. We see with light, which is an EM radiation: we can appreciate the beauty of a Fiji coral reef in blue waters, the warm radiance of sunshine, and the sting of sunburn. These observations are brought to us by electromagnetic waves. Let me emphasize that a field cannot be seen; nonetheless, it is real and can do work (by applying forces) on objects and exchange energy. It took scientists a long time to accept the concept of a field, which happened only in the 1890s.
We are now well familiar with the fact that X-rays can reveal a broken bone, and with microwave popcorn (and the so-called "microwave generation," a new human generation with a tendency to overuse microwave cookers to get their food). Again, these are manifestations of the EM waves all around us. In summary, the electromagnetic force is essentially responsible for almost all physical phenomena encountered in day-to-day experience, with the exception of gravity. Friction, electricity, electric motors, permanent magnets, electromagnets, lightning, and electromagnetic radiation (radio waves, microwaves, X-rays, etc., as well as visible light) are all part of electromagnetism.
What are electromagnetic waves? How are they created, and how do they travel? How can we understand and organize their widely varying properties? What is their relationship to electric and magnetic effects? The answers to these questions are fundamental to our discussion of solar and wind energy in the 300-level classes. Actually, the answers to these questions are fundamental to any engineering and science studies, irrespective of discipline. It does not make sense to call yourself an engineer or a scientist if you are not familiar with the basic notions of electromagnetic waves.
It is useful to note that electromagnetics and electromagnetism are generally considered synonyms. The theory of electromagnetism includes the definition of several fundamental concepts, among which electric charge, electric current, and electric and magnetic fields are central. It also contains the study of the forces acting upon electric charge carriers in motion and the laws governing the energy of electromagnetic fields.
A vacuum (also known as free space) is a space entirely devoid of matter. Outer space (a region with a gaseous pressure much less than atmospheric pressure) is considered a good approximation of a vacuum because it has less matter in it than anything mankind can reproduce, but it still has some atoms bouncing around. In a vacuum, electromagnetic radiation travels unobstructed at the speed of light, which is constant and invariant with direction. In other words, a vacuum is a homogeneous, isotropic medium. Unlike elastic waves, electromagnetic waves do not need a medium to propagate in; they work just fine in a vacuum. Warming occurs when radiation or light from the sun is absorbed by the Earth and its atmosphere and is then changed into heat energy. We see light (i.e., visible EM waves) from stars and galaxies; the EM fields generated there propagate through space (a vacuum) and move electrons on the Earth. So we need to understand wave propagation through a vacuum to describe and characterize light, and to study climate and even weather. Also, we need to understand wave propagation through a vacuum to effectively harness solar energy; solar cells convert sunlight into electrical energy.
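The constancy of the vacuum speed of light is not an independent assumption: it follows from two vacuum constants, the permeability mu_0 and the permittivity eps_0, via c = 1 / sqrt(mu_0 * eps_0). A minimal numerical check:

```python
# Minimal sketch: the speed of light in a vacuum from the vacuum
# permeability and permittivity, c = 1 / sqrt(mu_0 * eps_0).

import math

MU_0 = 4.0e-7 * math.pi   # vacuum permeability, H/m (classical value)
EPS_0 = 8.8541878128e-12  # vacuum permittivity, F/m

c = 1.0 / math.sqrt(MU_0 * EPS_0)
print(f"c = {c:.0f} m/s")  # close to the defined value, 299,792,458 m/s
```

That a speed of propagation pops out of two static constants of electricity and magnetism was one of Maxwell's key clues that light is an electromagnetic wave.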
The moment we talk about long distances, we should start talking about waves traveling long distances without lines (wireless) through space (vacuum). As the name suggests, wireless systems operate via transmission through free space rather than through a wired connection. So I can sit in the basement and still talk on my mobile phone. Users of mobile phones can make and receive calls almost anywhere, including while in motion.
When dealing with electromagnetics in matter, special attention must be paid to the microscopic (or atomic) scale and to the macroscopic scale at which experimental measurements are made, as we will see in the next lecture. This distinction is not important in a vacuum because we are dealing with a homogeneous medium (the same properties at all points) that is also isotropic (the same properties along any direction).
The basic laws of classical electromagnetic theory were formulated by James Clerk Maxwell in 1873. The physical quantities that describe EM wave motion depend on position and on time. Maxwell's equations are not just a theory; they are the basis of your cell phones (which, by the way, work just fine). Maxwell's equations govern all of (classical) electricity and magnetism. Here we describe Maxwell's equations in a vacuum. Note that until Maxwell, electricity and magnetism were considered separate (and mysterious) phenomena.
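For reference, Maxwell's equations in a vacuum, in the source-free (charge- and current-free) form used to derive wave propagation, can be written as:

```latex
% Maxwell's equations in a vacuum (source-free form)
\begin{aligned}
\nabla \cdot \mathbf{E} &= 0, &
\nabla \cdot \mathbf{B} &= 0, \\
\nabla \times \mathbf{E} &= -\frac{\partial \mathbf{B}}{\partial t}, &
\nabla \times \mathbf{B} &= \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t},
\end{aligned}
```

where E is the electric field, B is the magnetic field, and mu_0 and eps_0 are the vacuum permeability and permittivity. Taking the curl of either curl equation and substituting the other yields a wave equation whose propagation speed is c = 1 / sqrt(mu_0 * eps_0), the speed of light.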
Matter is made of atoms (about 10^-10 m in size). We can describe Maxwell's equations at the atomic scale by describing the fields entering these equations at the atomic scale. This choice implies that Maxwell's equations consider all matter to be made of charged and uncharged particles. Maxwell's equations that deal with the microscopic fields associated with discrete charges are generally known as the microscopic Maxwell's equations. For many problems, it is not convenient to deal with point charges: if the point charges we have been discussing are, say, electrons, then a macroscopic object will consist of an absolutely enormous number of electrons, each with a very tiny charge. The macroscopic Maxwell's equations ignore many details on a fine scale that may not be important to understanding matters on a grosser scale (the scale of the human senses) by calculating fields that are averaged over some suitably sized volume.