News + Media
LA Times, Dean Kuipers
Several readers pointed out an omission in last week's post about the National Oceanic and Atmospheric Administration’s release of its Annual Greenhouse Gas Index, which showed that man-made gases that contribute to global warming continued a steady rise. The post — and the AGGI — mentioned carbon dioxide, methane, nitrous oxide and other gases, but failed to mention the biggest contributor to global warming: plain old water vapor.
“I want to comment that the way-dominant greenhouse gas in the atmosphere is not mentioned, namely water vapor,” writes Ken Saunders of Pacific Palisades. “Water vapor accounts for about 97 percent of the total (natural plus man-emitted) greenhouse warming of the planet. See, e.g., John Houghton's ‘The Physics of Atmospheres, 3rd edition,’ Cambridge University Press, 2002.”
This is true: water vapor is the major player in the greenhouse effect, and it is often omitted from reporting about global warming — mostly because it is more a symptom than a cause of global climate change, and cannot be easily mitigated.
Tom Boden, director of the U.S. Energy Department’s Carbon Dioxide Information Analysis Center at Oak Ridge National Laboratory, acknowledges in an email: “Folks are right when they state water vapor is a powerful greenhouse gas and not routinely measured directly in the atmosphere. Atmospheric water vapor is difficult to measure, highly reactive, and variable in amount due to meteorological conditions (i.e., atmospheric water vapor is continuously being generated from evaporation and continuously removed by condensation).”
“Water vapor is the most important greenhouse gas and natural levels of [carbon dioxide, methane and nitrous oxide] are also crucial to creating a habitable planet,” writes John Reilly, professor at MIT and co-director of the Joint Program on the Science and Policy of Global Change, Center for Environmental Policy Research, in an email.
That idea leads many to believe that global warming is natural and cannot be affected much by human activity. Reader Roy W. Rising of Valley Village writes: “Today's report focuses on a bundle of gases that comprise a very small part of total of ‘greenhouse’ gases. It totally disregards the long-known fact that about 95% of all ‘greenhouse’ gases is WATER VAPOR! Spending billions of dollars to alter a few components of the 5% won't affect the natural course of climate change.”
Reilly warns, however, that scientists don’t blame water vapor or clouds for global warming.
“Concerns about global warming are about how human beings are altering the radiative balance,” says Reilly. “While some of the things we do change water vapor directly, they are insignificant. Increasing ghg's [greenhouse gases] through warming will increase water vapor and that is a big positive feedback [meaning: the more greenhouse gases, the more water vapor, the higher the temperature]. But the root cause are ghg's. So in talking about what is changing the climate, changes in water vapor are not a root cause.”
Read the rest of the story at the Los Angeles Times.
Many sources can contribute to our energy supply, but likely not at a major scale in the near future.

Beyond wind and solar power, a variety of carbon-free sources of energy — notably biofuels, geothermal energy and advanced nuclear power — are seen as possible ways of meeting rising global demand.
But many of these may be difficult to scale up enough to make a major contribution, at least within the next couple of decades. And a full accounting of costs may show that some of these technologies are not realistic contributors toward reducing emissions — at least, not without new technological breakthroughs.
Biofuels have been an especially controversial and complex subject for analysts. Different studies have come to radically different conclusions, ranging from some suggesting the potential for significant reductions in greenhouse gas emissions to others showing a possible net increase in emissions through increased use of biofuels.
For example, a 2009 study from MIT’s Joint Program on the Science and Policy of Global Change found that a major global push to replace fossil fuels with biofuels, advocated by many as a way to counter greenhouse gas emissions and climate change, could actually have the opposite effect. Without strict regulations, that study found, the push to grow plants for biofuels could lead to the clearing of forestland. But forests effectively absorb carbon from the air, so the net effect of such clearing would be an increase in greenhouse gases entering the atmosphere, instead of a decrease.
Another recent MIT study, by researcher James Hileman of MIT’s Department of Aeronautics and Astronautics, found that replacing fossil fuels with biofuels for aviation could have either positive or negative effects — depending on which crops were used as feedstock, where these were located, and how the fuels were processed and transported.
Key to biofuels’ success is developing feedstocks that wouldn’t take away land otherwise used to grow food crops. At least two broad approaches are being studied: using microbes, perhaps biologically engineered ones, to break down plant material so biofuels can be produced from agricultural waste; and using microscopic organisms such as algae to convert sunlight directly into molecules that can be made into fuel. Both are active areas of research.
For the former, one problem is that traditional processes to break down cellulose use high temperatures. “You really want these conversions to go on at low temperature, otherwise you lose a lot of energy to heat up” the material, says Ron Prinn, the TEPCO Professor of Atmospheric Science and co-director of the MIT Joint Program on the Science and Policy of Global Change. But, he adds: “Given the ingenuity of bioengineers, these conversion problems will be solved.”
Tapping the Earth
Geothermal energy has huge theoretical potential: The Earth continuously puts out some 44 terawatts (trillions of watts) of heat, which is three times humanity’s current energy use.
The most promising technology for tapping geothermal energy for large-scale energy production is so-called hot dry rock technology (also called engineered geothermal), in which deep rock is fractured, and water is pumped down into a deep well, through the fractured rock, then back up an adjacent well after heating up. This heated water can then be used to generate steam to drive a turbine. A 2006 MIT study led by professor emeritus Jefferson Tester, now at Cornell University, found potential to generate 0.5 terawatts of electricity this way in the United States by 2050. And a new study by researchers at Southern Methodist University, released this week, found that just using presently available technology, there is a potential for 3 terawatts of geothermal electricity in the United States.
In principle, this power source could be tapped anywhere on Earth. As you drill deeper, the temperature rises steadily; by going deep enough it’s possible to reach temperatures sufficient to drive generating turbines. Some places have high temperatures much closer to the surface than others, meaning this energy could be harnessed more easily.
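As a rough illustration of the drilling challenge, consider a sketch assuming a typical continental geothermal gradient of roughly 25 to 30 degrees Celsius per kilometer of depth (the gradient value and target temperatures below are illustrative assumptions, not figures from the studies cited; local conditions vary widely):

```python
# Back-of-the-envelope estimate of the drilling depth needed to reach a
# target rock temperature, assuming a constant geothermal gradient.
# The gradient and target temperatures are illustrative assumptions.

def depth_for_temperature(t_target_c, t_surface_c=15.0, gradient_c_per_km=27.5):
    """Depth in kilometers at which rock reaches t_target_c under a linear gradient."""
    return (t_target_c - t_surface_c) / gradient_c_per_km

# Steam-cycle generation generally wants working fluids of roughly 150-250 C.
for target_c in (150, 200, 250):
    print(f"{target_c} C needs a well ~{depth_for_temperature(target_c):.1f} km deep")
# -> roughly 5 to 9 km, which is why drilling cost dominates the economics
```

Under those assumptions, turbine-grade temperatures sit several kilometers down almost everywhere, consistent with the emphasis on drilling cost below.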
Using this method, “there are thousands of years’ worth of energy available,” says Professor of Physics Washington Taylor. “But you have to drill deeply,” which can be expensive using present-day drilling methods, he says.
“There’s a lot of energy there, but we don’t quite have the technology” to harness it cost-effectively, he says. Less-expensive ways of drilling deep into the Earth could help to make geothermal energy cost effective.
Advanced nuclear
Most analysts agree nuclear power provides substantial long-term potential for low-carbon power. But a broad interdisciplinary study published this year by the MIT Energy Initiative concluded that its near-term potential — that is, in the first half of this century — is limited. For the second half of the century, the study concluded, nuclear power’s role could be significant, as new designs prove themselves both technically and economically.
The biggest factors limiting the growth of nuclear power in the near term are financial and regulatory uncertainties, which result in high interest rates for the upfront capital needed for construction. Concerns also abound about nuclear proliferation and the risks of radioactive materials — some of which could be made into nuclear weapons — falling into the hands of terrorists or rogue governments.
And, while nuclear power is often thought of as zero-emissions, Prinn points out that “it has an energy cost — there’s a huge amount of construction with a huge amount of concrete,” which is a significant source of greenhouse gases.
A bewildering variety of other sources of energy have been discussed. Some, such as fusion power — harnessing the process that powers the sun itself — require significant technological breakthroughs, but could conceivably pay dividends in the very long term.
Others have inherent limits that will, for the foreseeable future, make them much smaller contributors to energy production. For example, the power of waves and tides is a potential energy source, with the world’s oceans producing a total of 3.75 terawatts of tidal power. But, practically speaking, the most that could ever be captured for human use is far less than one terawatt, Taylor says.
With any energy source, it’s crucial to examine, in great detail, the total process required to harness its power. “Every one of these has an energy or environmental cost,” Prinn says. “Nevertheless, this should not deter their consideration. It should instead spur the research needed to minimize these costs.”
How far can wind power go toward reducing global carbon emissions from electricity production?

With the world’s energy needs growing rapidly, can zero-carbon energy options be scaled up enough to make a significant difference? How much of a dent can these alternatives make in the world’s total energy usage over the next half-century? As the MIT Energy Initiative approaches its fifth anniversary next month, this five-part series takes a broad view of the likely scalable energy candidates.
Of all the zero-carbon energy sources available, wind power is the only one that’s truly cost-competitive today: A 2006 report by the U.S. Energy Information Administration put the total cost for wind-produced electricity at an average of $55.80 per megawatt-hour, compared to $53.10 for coal, $52.50 for natural gas and $59.30 for nuclear power.
As a result, wind turbines are being deployed rapidly in many parts of the United States and around the world. And because of wind’s proven record and its immediate and widespread availability, it’s an energy source that’s seen as having the potential to grow very rapidly.
“Wind is probably one of the most significant renewable energy sources, simply because the technology is mature,” says Paul Sclavounos, an MIT professor of mechanical engineering and naval architecture. “There is no technological risk.”
Globally, 2 percent of electricity now comes from wind, and in some places the rate is much higher: Denmark, the present world leader, gets more than 19 percent of its electricity from wind, and is aiming to boost that number to 50 percent. Some experts estimate wind power could account for 10 to 20 percent of world electricity generation over the next few decades.
Taking a longer-term view, a widely cited 2005 study by researchers at Stanford University projected that wind, if fully harnessed worldwide, could theoretically meet the world’s present energy needs five times over. And a 2010 study by the National Renewable Energy Laboratory found that the United States could get more than 12 times its current electricity consumption from wind alone.
But impressive as these figures may sound, wind power still has a long way to go before it becomes a significant factor in reducing carbon emissions. The potential is there — with abundant wind available for harvesting both on land and, especially, over the oceans — but harnessing that power efficiently will require enormous investments in manufacturing and installation.
So far, installed wind turbines have a generating capacity of only about 0.2 terawatts (trillions of watts) worldwide — a number that pales in comparison to an average world demand of 14 terawatts, expected to double by 2050. The World Wind Energy Association now projects global wind-power capacity of 1.9 terawatts by 2020.
But that’s peak capacity, and even in the best locations the wind doesn’t blow all the time. In fact, the world’s wind farms operate at an average capacity factor (the percentage of their maximum power that is actually delivered) somewhere between 20 and 40 percent, depending on their location and the technology.
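To see what capacity factor does to these headline numbers, here is a minimal sketch; the 30 percent figure is simply an illustrative midpoint of the 20-to-40-percent range above, not a measured value:

```python
# Convert nameplate (peak) wind capacity into expected average delivered
# power using a capacity factor. The 0.30 factor is an illustrative
# midpoint of the 20-40% range cited above, not a measured value.

def average_power_tw(nameplate_tw, capacity_factor):
    """Expected average delivered power, in terawatts."""
    return nameplate_tw * capacity_factor

projected_2020_capacity_tw = 1.9   # WWEA projection cited above
world_demand_tw = 14.0             # average world demand cited above

delivered_tw = average_power_tw(projected_2020_capacity_tw, 0.30)
print(f"~{delivered_tw:.2f} TW delivered on average")            # ~0.57 TW
print(f"~{delivered_tw / world_demand_tw:.1%} of world demand")  # ~4.1%
```

In other words, even the 2020 projection would deliver only a few percent of today’s average world demand.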
Some analysts are also concerned that widespread deployment of wind power, with its inherently unpredictable swings in output, could stress power grids, forcing the repeated startup and shutdown of other generators to compensate for wind’s variability. Many of the best wind-harvesting sites are far from the areas that most need the power, necessitating significant investment in delivery infrastructure — but building wind farms closer to population centers is controversial because many people object to their appearance and their sounds.
One potential solution to these problems lies offshore. While many wind installations in Europe have been built within a few miles of shore, in shallow water, there is much greater potential more than 20 miles offshore, where winds blow faster and more reliably. Such sites, while still relatively close to consumers, are generally far enough away to be out of sight.
MIT’s Sclavounos has been working on the design of wind turbines for installation far offshore, using floating platforms based on technology used in offshore oil rigs. Such installations along the Eastern Seaboard of the United States could theoretically provide most of the electricity needed for the eastern half of the country. And a study in California showed that platforms off the coast there could provide more than two-thirds of the state’s electricity.
Such floating platforms will be essential if wind is to become a major contributor to reducing global greenhouse gas emissions, says research engineer Stephen Connors, director of the Analysis Group for Regional Energy Alternatives (AGREA) at the MIT Energy Initiative. Wind energy is “never going to get big if you’re limited to relatively shallow, relatively close [offshore] sites,” he says. “If you’re going to have a large impact, you really need floating structures.”
All of the technology needed to install hundreds of floating wind turbines is well established, both from existing near-shore wind farms and from offshore drilling installations. All that’s needed is to put the pieces together in a way that works economically.
But deciding just how to do so is no trivial matter. Sclavounos and his students have been working to optimize designs, using computer simulations to test different combinations of platforms and mooring systems to see how they stand up to wind and waves — as well as how efficiently they can be assembled, transported and installed. One thing is clear: “It won’t be one design for all sites,” Sclavounos says.
In principle, floating structures should be much more economical than wind farms mounted on the seafloor, as in Europe, which require costly construction and assembly. By contrast, the floating platforms could be fully assembled at an onshore facility, then towed into position and anchored. What’s more, the wind is much steadier far offshore: Whereas a really good land-based site can provide a 35 percent capacity factor, an offshore site can yield 45 percent — roughly 30 percent more energy per installed turbine, greatly improving cost-effectiveness.
There are also concerns about the effects of adding a large amount of intermittent energy production to the national supply. Ron Prinn, co-director of MIT’s Joint Program on the Science and Policy of Global Change, says, “At large scale, there are issues regarding reliability of renewable but intermittent energy sources like wind that will require adding the costs of backup generation or energy storage.”
Exactly how big is offshore wind power’s potential? Nobody really knows for sure, since there’s insufficient data on the strength and variability of offshore winds. “You need to know where and when it’s windy — hour to hour, day to day, season to season and year to year,” Connors says. While such data has been collected on land, there is much less information for points offshore. “It’s a wholly answerable question, but you can’t do it by just brainstorming.”
And the answers might not be what wind power’s advocates want to hear. Some analysts raise questions about how much difference wind power can make. MIT physicist Robert Jaffe says that wind is “excellent in certain niche locations, but overall it’s too diffuse” — that is, too thinly spread out over the planet — to be the major greenhouse gas-curbing technology. “In the long term, solar is the best option” to be sufficiently scaled up to make a big difference, says Jaffe, the Otto (1939) and Jane Morningstar Professor of Physics.
Connors is confident that wind also has a role to play. “This planet is mostly ocean,” he says, “and it’s pretty windy out there.”
Given the enormous scale of worldwide energy use, there are limited options for achieving significant reductions in greenhouse gas emissions.

At any given moment, the world is consuming about 14 terawatts (trillions of watts) of energy — everything from the fuel for our cars and trucks, to wood burned to cook dinner, to coal burned to provide the electricity for our lights, air conditioners and gadgets.
Watts are a measure of the amount of power used at a given instant; a typical light bulb, for example, uses 60 watts for as long as it’s on. If you leave the bulb on for an hour, it will have used 60 watt-hours, a measure of energy consumption. To put those 14,000,000,000,000 watts in perspective, an average person working at manual labor eight hours a day can expend energy at a sustained rate of about 100 watts. But the average American consumes energy (in all forms) at a rate of about 600 times that much. “So our lifestyle is equivalent to having 600 servants, in terms of direct energy consumption,” says Robert Jaffe, the Otto (1939) and Jane Morningstar Professor of Physics at MIT.
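The distinction between power (a rate, measured in watts) and energy (power multiplied by time, measured in watt-hours) is worth making concrete; the short sketch below simply restates the figures from the paragraph above:

```python
# Power (watts) is a rate; energy (watt-hours) is power times time.
# All figures below are the ones quoted in the text above.

bulb_power_w = 60            # a typical light bulb draws 60 W while on
hours_on = 1
energy_used_wh = bulb_power_w * hours_on     # 60 Wh after one hour

laborer_power_w = 100        # sustained output of a manual laborer
us_rate_multiple = 600       # the average American's multiple, per Jaffe
us_per_capita_w = laborer_power_w * us_rate_multiple

world_power_w = 14e12        # 14 TW of instantaneous world consumption

print(f"Bulb energy after {hours_on} h: {energy_used_wh} Wh")
print(f"US per-capita rate: {us_per_capita_w:,} W (~{us_rate_multiple} 'servants')")
print(f"World consumption: {world_power_w:,.0f} W")
```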
Of that 14 terawatts (TW), about 85 percent comes from fossil fuels. But since world energy use is expected to double by 2050, just maintaining carbon emissions at their present rate would require coming up with about 14 TW of new, non-carbon sources over the next few decades. Reducing emissions — which many climate scientists consider essential to averting catastrophic changes — would require even more.
According to Ernest J. Moniz, the Cecil and Ida Green Distinguished Professor of Physics and Engineering Systems and director of the MIT Energy Initiative, a widely cited 2004 paper in Science introduced the concept of “wedges” that might contribute to carbon-emissions reduction. The term refers to a graph projecting energy use between now and 2050: Wedges are energy-use reductions that slice away at the triangle between a steadily rising line — representing a scenario in which no measures are taken to curb energy use — and a horizontal line reflecting a continuation of present levels of usage.
Of course, even eliminating the triangle altogether by holding energy usage at current levels would not reduce the greenhouse gas emissions that have been steadily heating up the planet; it would simply stabilize emissions at present levels, slowing the rate of further growth. But most analyses, such as those by MIT’s Joint Program on the Science and Policy of Global Change, indicate that merely stabilizing emissions still presents a better-than-even chance of triggering a rise in global temperatures of at least 2.3 degrees Celsius by 2100, an amount that could lead to devastating changes in sea level, as well as increased patterns of both flooding and droughts. Preventing such serious consequences, most analysts say, would require not just stabilizing emissions but drastically curtailing them — in other words, finding additional wedges to implement.
In the Science paper, authors Stephen Pacala and Robert Socolow of Princeton University listed 15 possible wedges: energy-saving technologies to chip away at the triangle. (The paper was recently updated by Socolow, in the Bulletin of the Atomic Scientists, to reflect the years that have passed since the initial publication and the lack of any net reductions so far). While there are indeed technologies that can contribute to reductions on the order of terawatts, Moniz says Pacala and Socolow’s analysis is “not necessarily very realistic,” and “they made it sound like implementing one of these wedges is too easy.” In fact, every one of the options has its own difficulties, Moniz says.
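For a sense of scale, the original Pacala-Socolow framing defines each wedge as a triangle of avoided emissions growing linearly from zero to about one gigaton of carbon per year over 50 years; the arithmetic below uses those published figures, which do not appear in this article:

```python
# Geometry of a single "stabilization wedge" as defined in Pacala and
# Socolow's 2004 Science paper: avoided emissions grow linearly from
# zero to about 1 gigaton of carbon per year (GtC/yr) over 50 years.

years = 50
final_avoided_gtc_per_yr = 1.0

# Area of the triangular wedge = cumulative carbon kept out of the air.
cumulative_gtc = 0.5 * years * final_avoided_gtc_per_yr
print(f"One wedge avoids ~{cumulative_gtc:.0f} GtC over {years} years")  # ~25 GtC

# The paper estimated roughly seven such wedges would be needed to hold
# emissions flat over that period.
print(f"Seven wedges: ~{7 * cumulative_gtc:.0f} GtC avoided")            # ~175 GtC
```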
But some aspects of bringing about such a drastic reduction are not controversial. “The number one thing is demand reduction, that’s clear,” Moniz says. “Most [scientists] think you need to get more than one wedge” from demand reduction — another way of saying increased efficiency — “because if you don’t, then we’d need a miracle” to achieve the needed reductions in emissions through other means, he says.
In fact, efficiency gains may yield several wedges, corresponding to multiple terawatts saved. That’s not so surprising when you consider that of all the energy consumed in the United States from all sources, some 58 percent is simply lost — that is, not actually used to do anything useful — says Jaffe, who co-teaches an MIT class called “The Physics of Energy.” For example, the typical automobile wastes more than two-thirds of the energy contained in the gasoline it burns, dumping it into the environment as heat.
“U.S. transportation, on average, is about 20 percent efficient,” Jaffe says. “That’s scandalous. There are tremendous savings to be gained,” he says, such as by continuing to raise the requirements for fuel efficiency of vehicles.
But after picking the relatively low-hanging fruit of efficiency, potential solutions for reducing emissions become more complex and less potent. Most of the technologies that are widely discussed as low- or zero-carbon alternatives are limited in their potential impact, at least within the next few decades.
For example, many people talk about a “nuclear renaissance” that could provide electricity with very little greenhouse gas impact. But to get even one terawatt of power from new nuclear plants “ain’t so simple,” Moniz says. The operating costs of new nuclear-plant designs, for example, will have to be proven through years of operating experience before financial markets will be willing to fund such systems on a large scale.
Over the longer run, such technologies may be crucial to meeting the world’s growing energy demands. By the end of this century, global energy needs could be more than triple those of today, says Ron Prinn, the TEPCO Professor of Atmospheric Science and co-director of MIT’s Joint Program on the Science and Policy of Global Change. “Most of that will be driven by the industrialization of China, India and Indonesia,” he explains, as these countries evolve from agrarian to industrialized societies.
Ultimately, Moniz suggests, a non-carbon energy future will likely consist largely of some combination of nuclear power, renewable energy sources and carbon-capture systems that allow fossil fuels to be used with little or no emissions of greenhouse gases. Which of these will dominate in a given area comes down to costs and local conditions.
“No one technology is going to get us into a sustainable energy future,” Jaffe says. Rather, he says, it’s going to take a carefully considered combination of many different approaches, technologies and policies.
With the U.S. backing away from a cap-and-trade system, the EU Emissions Trading System (ETS) stands as a solitary, iconic, and often-criticized outpost for market-based approaches to limiting greenhouse gas emissions. A. Denny Ellerman evaluates the performance and prospects of the EU ETS and considers whether it, and the global trading vision embodied in the Kyoto Protocol, is at a dead end or, despite all the difficulties, is still the way to an effective global climate policy.
In collaboration with Tsinghua University, MIT launches a new research project to analyze the impact of China’s existing and proposed energy and climate policies.
Multiple forecasts suggest that rapidly developing nations such as China will be responsible for most of the growth in carbon dioxide emissions over the next 50 years. This expectation is the driving force behind the formation of a new project involving researchers from MIT and China, known as the China Energy and Climate Project (CECP), which officially launches today.
The China Energy and Climate Project will involve close collaboration and personnel exchange between the MIT Joint Program on the Science and Policy of Global Change and the Institute for Energy, Environment and Economy at Tsinghua University in Beijing. Run in collaboration with the MIT Energy Initiative, the five-year project is based at MIT and directed by Valerie Karplus PhD ’11, a recent graduate of MIT’s Engineering Systems Division. John Reilly, co-director of the MIT Joint Program on the Science and Policy of Global Change and senior lecturer in the Sloan School of Management, will be a principal investigator.
The goal of the CECP is to analyze the impact of existing and proposed energy and climate policies in China on technology, energy use, the environment and economic welfare by applying — and, where necessary, developing — both quantitative and qualitative analysis tools.
The development and application of such new tools will include both national and regional energy-economic models of China. Growing out of the MIT Joint Program’s Emissions Prediction and Policy Analysis model, these new tools will be informed by three major components: First, researchers will study the behaviors and trends that drive micro-level decisions made by households and firms to better understand supply and demand within energy-intensive sectors. Second, the researchers will analyze specific technology prospects, including electric vehicles, advanced fuels and alternative sources of electricity, to determine China’s technology potential. Finally, current and proposed climate and energy policies in China will be evaluated for environmental and economic impact. These evaluations will be conducted primarily through the use of the models developed for the project, which will be based on similar methods employed in the MIT Joint Program over the last 20 years.
“We are building a strong trans-Pacific research team that brings expertise in economics, engineering and public policy to this exciting new project,” Karplus says. “Both sides are eager to get started, to learn from each other, and to produce rigorous analysis on important policy questions.”
The research carried out at MIT is funded by founding sponsors Eni, ICF International and Shell. The project will present its findings at an annual meeting in Beijing to influential members of the academic, industry and policy communities in China, providing rigorous, transparent analysis of climate and energy policy options in China and their global implications.
Anaerobic digesters provide a win-win opportunity for agriculture and energy.

When thinking about renewable energy sources, images of windmills and solar panels often come to mind. Now add to that picture livestock manure. Researchers from the MIT Joint Program on the Science and Policy of Global Change have found that the implementation of climate policies in the United States could hasten adoption of anaerobic digesters as a source for renewable electricity.
Anaerobic digesters break down organic waste using methane-producing bacteria. This methane can then be captured and burned to generate electricity. But anaerobic digesters have several other benefits besides production of renewable energy.
Traditional livestock-manure-management techniques include storing manure in anaerobic pits or lagoons, which release methane into the atmosphere. In the United States, these emissions account for 26 percent of agricultural emissions of methane, a potent greenhouse gas. Diverting these emissions toward electricity generation thus reduces total U.S. greenhouse gas emissions and may qualify for low-carbon energy subsidies and methane-reduction credits. Anaerobic digesters can also reduce the odor and pathogens commonly found in stored manure, and digested manure can be applied to crops as a fertilizer.
In collaboration with the University of Wisconsin, researchers used the MIT Emissions Prediction and Policy Analysis model to test the effects of a representative U.S. climate policy on the adoption of anaerobic digesters. To date, support for anaerobic digesters has been limited, and the economic value of most systems is insufficient to promote widespread adoption.
The lack of widespread use of anaerobic digesters is not due to lack of availability; the researchers estimate that cattle, swine and poultry manure deposited in lagoons or pits currently has the potential to produce 11,000 megawatts of electricity. (For scale, one megawatt is enough to power roughly 1,000 average homes, so 11,000 megawatts corresponds to roughly 11 million homes.) The main reason for the lack of anaerobic digesters is that they compete with electricity from cheaper, traditional sources. However, under a climate policy that puts a price on all emissions, electricity produced from fossil fuels becomes more expensive, and low-carbon energy sources become more competitive.
The study found that, under a representative climate policy, anaerobic digesters are introduced in 2025 when the price of carbon-dioxide equivalent, or CO2e, is $76 per ton. (CO2e refers to the concentration of carbon dioxide that would cause the same amount of radiative forcing as a given greenhouse gas. Because different greenhouse gases have different global warming potentials, carbon dioxide is used as a reference gas to standardize the quantification of multiple greenhouse gas emissions.) By 2050, use of anaerobic digesters would contribute 5.5 percent of national electricity generation and would mitigate 151 million metric tons of CO2e, mostly from methane abatement. These mitigated emissions would also allow the livestock operations to sell emissions permits, adding economic value to the process.
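The CO2e arithmetic is a weighted sum: each gas’s mass is multiplied by its global warming potential (GWP) relative to carbon dioxide. The sketch below uses a 100-year GWP of about 25 for methane, a commonly cited IPCC value (assessments differ slightly); the tonnage in the example is hypothetical, not a figure from the study:

```python
# CO2-equivalent (CO2e) expresses emissions of any greenhouse gas as the
# mass of CO2 that would cause the same radiative forcing, using the
# gas's global warming potential (GWP). The 100-year GWPs below are
# commonly cited IPCC values; different assessments vary slightly.

GWP_100YR = {"CO2": 1, "CH4": 25, "N2O": 298}

def co2e_tons(gas, tons):
    """Metric tons of CO2-equivalent for `tons` of the given gas."""
    return tons * GWP_100YR[gas]

# Hypothetical example: abating 1 million metric tons of methane.
print(f"{co2e_tons('CH4', 1e6):,.0f} t CO2e")   # 25,000,000 t CO2e
```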
Overall, the researchers identified a win-win situation, where incentives to reduce greenhouse gases would result in both market benefits (cheaper energy generation and sale of emissions credits) and non-market co-benefits (environmental and health gains, fertilizer uses) from adoption of anaerobic-digester operations. Such incentives, in the form of climate policies that provide methane-reduction credits and increase the costs of electricity from fossil fuels, provide the opportunity for a novel linkage between agriculture and energy production.
[Figure: Readily available manure resources could contribute more than 11,000 MW of electricity generation potential; each colored grid cell can support an anaerobic digester of a given capacity.]
Competing demands for food, fuels, and forests
How do you value an ecosystem? Putting a dollar value on natural systems such as forests has long vexed economists.
Forests provide “non-use values,” such as the pleasure of knowing that a natural system exists, and recreational values, such as hunting, fishing and wildlife viewing. But recently, ecologists have also sought to value a broader set of “ecosystem services,” or the goods and services that ecosystems provide to a market economy.
Ecosystem services related to land include conventional food and forest products, as well as the potential to produce biofuels. But ecosystems also have the ability to store carbon. If a price on carbon were established, it would create an incentive to enhance carbon storage, a new ecosystem service that would have to be balanced against conventional food, forestry and biofuels production. As more ecosystem services are recognized and fully priced in a market, the demand for land naturally increases.
Researchers from the MIT Joint Program on the Science and Policy of Global Change have used an economic model to explicitly represent recreation value of ecosystems and their carbon storage value. The study examines how demand for ecosystem services will affect land use, food prices and the prospects for biofuels production.
The study found that demand for biofuels grows when a carbon tax is implemented, increasing CO2 emissions as forests are converted to cropland. However, if that carbon tax also covers emissions from land-use change, the resulting economic incentive is enough to avoid deforestation. And if a tradeable credit program to incentivize CO2 sequestration on land is implemented, significant reforestation occurs, such that land use becomes a large net sink for CO2.
This is a surprising result, as land-use emissions currently make up about 20 percent of total emissions. But with carbon taxes and a tradeable credit program, land use would mitigate emissions by storing carbon in forests and replacing fossil fuels with biofuels. In fact, the analysis shows that if carbon storage were credited, converted land would eventually store carbon equivalent to as much as one-third of total global energy emissions over the coming century.
Unfortunately, it’s not that simple — such policies would imply some difficult tradeoffs. In the scenario with full carbon pricing, substantial reforestation and biofuels production occurs, but at the expense of conventional agricultural products. The two new non-food demands for land cause commodity prices to increase, especially impacting livestock prices. The livestock sector is particularly affected because both the rental prices for grazing land and the price of grains used to feed livestock rise. As food prices rise, poor consumers will be considerably affected and may suffer.
“Since conventional agricultural goods are priced in markets, the higher [food] prices projected are efficient in the sense that they reflect the marginal value of storing carbon that would be lost if more land were devoted to food production,” explains John Reilly, co-director of the MIT Joint Program and co-author of the study. He adds, “However, the market values do not take into account equity considerations, and so in the absence of food programs worldwide such higher prices would place a disproportionate burden on lower income people.”
Some of the resulting increase in food prices may be offset by future agricultural technology. But even with such technologies, increasing food prices would still be a substantial departure from the historical trend of falling food prices. As new demands for land stem from an expanded view of ecosystem services, special attention will be needed to counteract the impacts on development and food security.
“It is a dilemma where climate change itself may have negative consequences for food production but extensive reforestation to limit climate change may also squeeze food production by limiting the land available for conventional agriculture. Thrown on top is a demand for land for biofuels production that could put further pressure on food prices,” Reilly says. “The results are a cautionary tale in embracing efficient market solutions in a world where there are not ready mechanisms to deal with inequitable outcomes.”
MIT researchers improve upon methods to model atmospheric aerosols.
Allison Crimmins
Urban regions account for an ever-increasing fraction of Earth’s population, and are consequently an ever-increasing source of air pollutants. These pollutants include anthropogenic aerosols, which have important climate and health implications. But modeling aerosol emissions from urban areas is difficult due to the fine temporal and spatial scales required. Thus, urban areas contribute significantly to the overall uncertainty and variability in global atmospheric model predictions of aerosol and pollutant distribution.
To address these uncertainties, researchers from the MIT Joint Program on the Science and Policy of Global Change set out to see if they could better model aerosol emissions and distribution from urban regions. To accurately model urban areas, factors such as the amount and distribution of emissions, the meteorological and geographical properties of the region, and the chemical and physical processing of emissions over time would need to be considered on spatial and temporal scales much smaller than global models. Previously, modelers have attempted to account for urban aerosol emissions by using a correction factor, which diluted total aerosol emissions across global model grid cells. This dilution method, however, does not capture the heterogeneity of urban and non-urban areas within each grid cell.
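To make concrete what the dilution approach loses, here is a toy sketch with invented numbers (none of them from the study): averaging a city’s emissions over a coarse grid cell erases the concentration peak that urban chemistry actually responds to:

```python
# Toy illustration of the "dilution" approach: spreading a city's
# emissions uniformly over a coarse global-model grid cell erases the
# urban/rural contrast. All numbers here are invented for illustration.

cell_area_km2 = 250 * 250      # one coarse global-model grid cell
urban_area_km2 = 2_500         # the city occupies 4% of the cell
urban_emissions = 95.0         # arbitrary units; the city dominates
rural_emissions = 5.0

# Dilution method: one uniform emission flux for the whole cell.
diluted_flux = (urban_emissions + rural_emissions) / cell_area_km2

# Resolved view: sharply different fluxes inside and outside the city.
urban_flux = urban_emissions / urban_area_km2
rural_flux = rural_emissions / (cell_area_km2 - urban_area_km2)

print(f"diluted {diluted_flux:.5f} vs urban {urban_flux:.5f} vs rural {rural_flux:.5f}")
# The urban flux is ~24x the diluted value; nonlinear chemistry
# (oxidation, secondary-aerosol formation) responds to the peak,
# not to the cell average.
```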
Instead, the MIT researchers developed a new detailed air quality model, using meteorological and emissions data from 16 representative urban areas. This urban processing model examined seven types of aerosols of differing sizes and composition, and modeled a total of 251 urban areas, including 91 from China, 36 from India, 50 from developed nations (Australia, Canada, EU, Japan, Singapore, South Korea, US) and 74 from developing nations. The urban processing model was then incorporated into a larger global model that simulates atmospheric chemistry and transport at regional to global scales. The researchers compared the predicted atmospheric aerosol concentrations from this new method with results from the dilution method.
“Not only are we the first group to successfully incorporate an urban-scale chemical processing model into a 3-dimensional global model,” explains Dr. Jason Cohen, the lead author on the report, “but our results resolve important processes which the rest of the modeling community still neglects to include.”
The study found that the urban processing model predicted a lower concentration of atmospheric aerosols than the dilution method, particularly in the Northern Hemisphere and in the summer season. In addition, the urban processing model showed increased concentrations of primary aerosols, like black carbon and organic carbon, and decreased concentrations of secondary aerosols, like sulfates. Thus excluding the urban processing model could lead to an overestimation of some aerosols and an underestimation of others.
The reason these biases exist in the dilution method is that urban areas tend to be more efficient at oxidizing and removing substances like black carbon and organic carbon from the atmosphere — not taking this into consideration leads to an overestimation of the concentration of these species. And because those species are oxidized, generation of secondary aerosol species actually increases in urban areas — not taking this into consideration leads to an underestimation of the concentration of those species.
Aerosols tend to cause negative radiative forcing. In other words, they have an overall “cooling effect” on the global climate. But using the urban processing method instead of the dilution method demonstrated an overall smaller concentration of aerosols in the atmosphere. Thus the detailed urban processing model predicts significantly less negative aerosol radiative forcing (less cooling) than the dilution method.
“We are continuing this effort, looking at the long-term climate effects of using detailed urban processing, such as how average surface temperature, precipitation, and cloud cover will be impacted,” says Cohen. “We hope that as we continue to look into the impacts of this new methodology and continue to learn more about the mistakes that the dilution simplification has led to, others in the climate modeling community will adopt and use our new standard.”
See also "Development of a fast, urban chemistry metamodel for inclusion in global models" (PDF)
By JUSTIN GILLIS
The scale of Hurricane Irene, which could cause more extensive damage along the Eastern Seaboard than any storm in decades, is reviving an old question: are hurricanes getting worse because of human-induced climate change?
The short answer from scientists is that they are still trying to figure it out. But many of them do believe that hurricanes will get more intense as the planet warms, and they see large hurricanes like Irene as a harbinger.
While the number of the most intense storms has clearly been rising since the 1970s, researchers have come to differing conclusions about whether that increase can be attributed to human activities.
“On a longer time scale, I think — but not all of my colleagues agree — that the evidence for a connection between Atlantic hurricanes and global climate change is fairly compelling,” said Kerry Emanuel, an expert on the issue at the Massachusetts Institute of Technology.
Among those who disagree is Thomas R. Knutson, a federal researcher at the government’s Geophysical Fluid Dynamics Laboratory in Princeton, N.J. The rising trend of recent decades occurred over too short a period to be sure it was not a consequence of natural variability, he said, and statistics from earlier years are not reliable enough to draw firm conclusions about any long-term trend in hurricane intensities.
“Everyone sort of agrees on this short-term trend, but then the agreement starts to break down when you go back longer-term,” Mr. Knutson said. He argues, essentially, that Dr. Emanuel’s conclusion is premature, though he adds that evidence for a human impact on hurricanes could eventually be established.
While scientists from both camps tend to think hurricanes are likely to intensify, they do not have great confidence in their ability to project the magnitude of that increase.
One climate-change projection, prepared by Mr. Knutson’s group, is that the annual number of the most intense storms will double over the course of the 21st century. But what proportion of those would actually hit land is another murky issue. Scientists say climate change could alter steering currents or other traits of the atmosphere that influence hurricane behavior.
Storms are one of nature’s ways of moving heat around, and high temperatures at the ocean surface tend to feed hurricanes and make them stronger. That appears to be a prime factor in explaining the power of Hurricane Irene, since temperatures in the Atlantic are well above their long-term average for this time of year.
The ocean has been getting warmer for decades, and most climate scientists say it is because greenhouse gases are trapping extra heat. Rising sea-surface temperatures are factored into both Mr. Knutson’s and Dr. Emanuel’s analyses, but they disagree on the effect that warming in remote areas of the tropics will have on Atlantic hurricanes.
Air temperatures are also rising because of greenhouse gases, scientists say. That causes land ice to melt, one of several factors leading to a rise in sea level. That increase, in turn, is making coastlines more vulnerable to damage from the storm surges that can accompany powerful hurricanes.
Overall damage from hurricanes has skyrocketed in recent decades, but most experts agree that is mainly due to excessive development along vulnerable coastlines.
In a statement five years ago, Dr. Emanuel, Mr. Knutson and eight colleagues called this “the main hurricane problem facing the United States,” and they pleaded for a reassessment of policies that subsidize coastal development — a reassessment that has not happened.
“We are optimistic that continued research will eventually resolve much of the current controversy over the effect of climate change on hurricanes,” they wrote at the time. “But the more urgent problem of our lemming-like march to the sea requires immediate and sustained attention.”