Thursday, October 1, 2009

Google Earth Application Maps Carbon's Course.

ScienceDaily (Sep. 30, 2009) — Sometimes a picture really is worth a thousand words, particularly when the picture is used to illustrate science. Technology is giving us better pictures every day, and one of them is helping a NASA-funded scientist and her team to explain the behavior of a greenhouse gas.
Google Earth -- the digital globe on which computer users can fly around the planet and zoom in on key features -- is attracting attention in scientific communities and aiding public communication about carbon dioxide. Recently Google held a contest to present scientific results using KML, a data format used by Google Earth.
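KML is an XML-based format: a text file of nested tags describing placemarks, tracks and overlays that Google Earth renders on the globe. As a rough illustration (the coordinates, names and file name below are invented, not drawn from Erickson's application), a few lines of Python are enough to write a track that Google Earth can open:

    # Minimal sketch: write a simple KML track that Google Earth can open.
    # The coordinates are invented for illustration; an application like
    # Erickson's encodes far richer, time-stamped 3D data.
    track = [
        (-84.0, 42.3, 300.0),  # (longitude, latitude, altitude in meters)
        (-83.9, 42.4, 450.0),
        (-83.8, 42.5, 600.0),
    ]
    coords = " ".join(f"{lon},{lat},{alt}" for lon, lat, alt in track)
    kml = f"""<?xml version="1.0" encoding="UTF-8"?>
    <kml xmlns="http://www.opengis.net/kml/2.2">
      <Placemark>
        <name>Example air-parcel track</name>
        <LineString>
          <altitudeMode>absolute</altitudeMode>
          <coordinates>{coords}</coordinates>
        </LineString>
      </Placemark>
    </kml>"""
    with open("example_track.kml", "w") as f:
        f.write(kml)

Setting altitudeMode to "absolute" is what lets Google Earth draw the track at its true height above sea level rather than draped onto the terrain.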
"I tried to think of a complex data set that would have public relevance," said Tyler Erickson, a geospatial researcher at the Michigan Tech Research Institute in Ann Arbor.
He chose to work with data from NASA-funded researcher Anna Michalak of the University of Michigan, Ann Arbor, who develops complex computer models to trace carbon dioxide back in time to where it enters and leaves the atmosphere.
"The datasets have three spatial dimensions and a temporal dimension," Erickson said. "Because the data is constantly changing in time makes it particularly difficult to visualize and analyze."
A better understanding of the carbon cycle has implications for energy and environmental policy and carbon management. In June 2009, Michalak described this research at the NASA Earth System Science at 20 symposium in Washington, D.C.
A snapshot from Erickson's Google Earth application shows green tracks representing carbon dioxide in the lowest part of the atmosphere, close to Earth's surface, where vegetation and land processes can affect the carbon cycle. Red tracks indicate particles at higher altitudes that are immune to ground influences.
The application is designed to educate the public and even scientists about how carbon dioxide emissions can be traced. A network of 1,000-foot towers across the United States is equipped with NOAA instruments that measure the carbon dioxide content of parcels of air at single locations.
But where did that gas come from and how did it change along its journey? To find out, scientists rely on a sleuthing technique called "inverse modeling" -- measuring gas concentrations at a single geographic point and then using clues from weather and atmospheric models to deduce where it came from. The technique is complex and difficult to explain even to fellow scientists.
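A toy numerical sketch may help (invented numbers, not Michalak's actual model): if an atmospheric transport model says how strongly each upwind source region influences each tower reading, least squares can run that relationship backward to estimate the sources:

    # Toy inverse-modeling sketch (illustrative only): given tower
    # measurements y and a transport "footprint" matrix H describing how
    # each source region contributes to each tower, estimate fluxes x
    # from y ~ H x by least squares.
    import numpy as np

    rng = np.random.default_rng(0)
    H = np.array([           # 4 tower readings, 3 source regions
        [0.8, 0.1, 0.0],
        [0.3, 0.5, 0.1],
        [0.1, 0.4, 0.4],
        [0.0, 0.2, 0.7],
    ])
    x_true = np.array([2.0, 0.5, 1.2])               # unknown source fluxes
    y = H @ x_true + 0.01 * rng.standard_normal(4)   # noisy measurements

    x_est, *_ = np.linalg.lstsq(H, y, rcond=None)    # run the model backward
    print("estimated fluxes:", x_est)                # close to x_true

Real inverse models work the same way in spirit, but with vastly more unknowns, weather-driven transport and careful statistical treatment of the uncertainties.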
Michalak related the technique to cream in a cup of coffee. "Say someone gave you a cup of creamy coffee," Michalak said. "How do you know when that cream was added?" Just as cream is not necessarily mixed perfectly, neither is the carbon dioxide in the atmosphere. If scientists can see the streaks of cream (carbon dioxide) and understand how the coffee (atmosphere) was stirred (weather), they can use those clues to retrace when and where the ingredient was added to the mix.
The visual result typically used by scientists is a static two-dimensional map of the location of the gas, as averaged over the course of a month. Most carbon scientists know how to interpret the 2D map, but visualizing the 3D changes for non-specialists has proved elusive. Erickson spent 70 hours programming the Google Earth application that makes it easy to navigate through time and watch gas particles snake their way toward the NOAA observation towers. For his work, Erickson was named one of the winners of Google's contest in March 2009.
"Having this visual tool allows us to better explain the scientific process," Michalak said. "It's a much more human way of looking at the science."
The next step, Erickson said, is to adapt the application to fit the needs of the research community. Scientists could use the program to better visualize the output of complex atmospheric models and then improve those models so that they better represent reality.
"Encouraging more people to deliver data in an interactive format is a good trend," Erickson said. "It should help innovation in research by reducing barriers to sharing data."
Adapted from materials provided by NASA.

San Andreas Affected By 2004 Sumatran Quake; Largest Quakes Can Weaken Fault Zones Worldwide.

ScienceDaily (Sep. 30, 2009) — U.S. seismologists have found evidence that the massive 2004 earthquake that triggered killer tsunamis throughout the Indian Ocean weakened at least a portion of California's famed San Andreas Fault. The results, which appear this week in the journal Nature, suggest that the Earth's largest earthquakes can weaken fault zones worldwide and may trigger periods of increased global seismic activity.
"An unusually high number of magnitude 8 earthquakes occurred worldwide in 2005 and 2006," said study co-author Fenglin Niu, associate professor of Earth science at Rice University. "There has been speculation that these were somehow triggered by the Sumatran-Andaman earthquake that occurred on Dec. 26, 2004, but this is the first direct evidence that the quake could change fault strength of a fault remotely."
Earthquakes are caused when a fault fails, either because of the buildup of stress or because of the weakening of the fault. The latter is more difficult to measure.
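One standard way to express that balance (not spelled out in the article, but it is what "fault strength" means below) is the Coulomb failure criterion: a fault slips when the shear stress τ acting along it reaches

    τ ≥ c + μ (σn − Pf)

where c is the cohesion, μ the coefficient of friction, σn the normal stress clamping the fault shut and Pf the pore-fluid pressure. Failure can therefore come either from raising τ (stress buildup) or from lowering the right-hand side (weakening), for example by increasing fluid pressure within the fault zone.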
The magnitude 9 earthquake in 2004 occurred beneath the ocean west of Sumatra and was the second-largest quake ever measured by seismograph. The temblor spawned tsunamis as high as 100 feet that killed an estimated 230,000 people, mostly in Indonesia, Sri Lanka, India and Thailand.
In the new study, Niu and co-authors Taka'aki Taira and Paul Silver, both of the Carnegie Institution for Science in Washington, D.C., and Robert Nadeau of the University of California, Berkeley, examined more than 20 years of seismic records from Parkfield, Calif., which sits astride the San Andreas Fault.
The team zeroed in on a set of repeating microearthquakes that occurred near Parkfield over two decades. Each of these tiny quakes originated in almost exactly the same location. By closely comparing seismic readings from these quakes, the team was able to determine the "fault strength" -- the shear stress level required to cause the fault to slip -- at Parkfield between 1987 and 2008.
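The core of that comparison can be sketched in a few lines of code (synthetic data only; the published analysis is far more involved): repeating earthquakes produce nearly identical seismograms, so changes in the fault zone between events show up as tiny time shifts that cross-correlation can pick out:

    # Sketch of detecting a small travel-time shift between two
    # recordings of "repeating" earthquakes. Synthetic pulses stand in
    # for real seismograms; the actual study's processing is far richer.
    import numpy as np

    fs = 100.0                         # sampling rate, Hz
    t = np.arange(0, 5, 1 / fs)

    def pulse(t, t0, f0=5.0):
        """Ricker wavelet centered at t0 -- a stand-in seismic pulse."""
        a = (np.pi * f0 * (t - t0)) ** 2
        return (1 - 2 * a) * np.exp(-a)

    repeat_a = pulse(t, t0=2.00)       # recording of one repeat
    repeat_b = pulse(t, t0=2.03)       # same source, slightly delayed

    # The lag at which the cross-correlation peaks gives the shift.
    xcorr = np.correlate(repeat_b, repeat_a, mode="full")
    lag = xcorr.argmax() - (len(t) - 1)
    print(f"measured delay: {lag / fs * 1000:.0f} ms")   # prints 30 ms

Tracking such shifts (and subtler changes in the scattered parts of the waveforms) across many repeating quakes is what lets researchers monitor how a fault's properties evolve over decades.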
The team found fault strength changed markedly three times during the 20-year period. The authors surmised that the 1992 Landers earthquake, a magnitude 7 quake north of Palm Springs, Calif. -- about 200 miles from Parkfield -- caused the first of these changes. The study found the Landers quake destabilized the fault near Parkfield, causing a series of magnitude 4 quakes and a notable "aseismic" event -- a movement of the fault that played out over several months -- in 1993.
The second change in fault strength occurred in conjunction with a magnitude 6 earthquake at Parkfield in September 2004. The team found another change at Parkfield later that year that could not be accounted for by the September quake alone. Eventually, they were able to narrow the onset of this third shift to a five-day window in late December during which the Sumatran quake occurred.
"The long-range influence of the 2004 Sumatran-Andaman earthquake on this patch of the San Andreas suggests that the quake may have affected other faults, bringing a significant fraction of them closer to failure," said Taira. "This hypothesis appears to be borne out by the unusually high number of large earthquakes that occurred in the three years after the Sumatran-Andaman quake."
The research was supported by the National Science Foundation, the Carnegie Institution of Washington, the University of California, Berkeley, and the U.S. Geological Survey.
Adapted from materials provided by Rice University.