News + Media

MIT News

Given the enormous scale of worldwide energy use, there are limited options for achieving significant reductions in greenhouse gas emissions.


At any given moment, the world is consuming energy at a rate of about 14 terawatts (trillions of watts) — everything from the fuel for our cars and trucks, to wood burned to cook dinner, to coal burned to provide the electricity for our lights, air conditioners and gadgets.

Watts are a measure of the amount of power used at a given instant; a typical light bulb, for example, uses 60 watts for as long as it’s on. If you leave the bulb on for an hour, it will have used 60 watt-hours, a measure of energy consumption. To put those 14,000,000,000,000 watts in perspective, an average person working at manual labor eight hours a day can expend energy at a sustained rate of about 100 watts. But the average American consumes energy (in all forms) at a rate of about 600 times that much. “So our lifestyle is equivalent to having 600 servants, in terms of direct energy consumption,” says Robert Jaffe, the Otto (1939) and Jane Morningstar Professor of Physics at MIT.
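
To make the power-versus-energy distinction concrete, here is a minimal arithmetic sketch using the figures quoted above (the 60-watt bulb, the 100-watt laborer and the 14-terawatt global total); the numbers come from the article, and the script itself is purely illustrative.

```python
# Illustrative arithmetic for the power vs. energy distinction described above.
# Watts measure an instantaneous rate; watt-hours measure energy used over time.

bulb_power_w = 60            # a typical bulb draws 60 W while it is on
hours_on = 1
energy_wh = bulb_power_w * hours_on
print(f"A {bulb_power_w} W bulb left on for {hours_on} hour uses {energy_wh} Wh of energy")

laborer_power_w = 100        # sustained output of a manual laborer, per the article
world_power_w = 14e12        # the 14 TW of global demand quoted above

print(f"Global demand of {world_power_w:.1e} W is equivalent to "
      f"{world_power_w / laborer_power_w:.1e} laborers working around the clock")
```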

Of that 14 terawatts (TW), about 85 percent comes from fossil fuels. But since world energy use is expected to double by 2050, just maintaining carbon emissions at their present rate would require coming up with about 14 TW of new, non-carbon sources over the next few decades. Reducing emissions — which many climate scientists consider essential to averting catastrophic changes — would require even more.

According to Ernest J. Moniz, the Cecil and Ida Green Distinguished Professor of Physics and Engineering Systems and director of the MIT Energy Initiative, a widely cited 2004 paper in Science introduced the concept of “wedges” that might contribute to carbon-emissions reduction. The term refers to a graph projecting energy use between now and 2050: Wedges are energy-use reductions that slice away at the triangle formed between a steadily rising line — representing a scenario in which no measures are taken to curb energy use — and a horizontal line reflecting a continuation of present levels of energy usage, with no increases.
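
The triangle described above can be sketched with a few lines of arithmetic, using the article’s figures (14 TW today, roughly doubling by 2050). The linear growth path and the count of seven equal wedges below are illustrative assumptions, not results from the Science paper.

```python
# Rough sketch of the "stabilization triangle" described above.
# Assumed (not from the Science paper): demand rises linearly from 14 TW in 2010
# to 28 TW in 2050, while the horizontal line stays at today's 14 TW.

demand_today_tw = 14.0
demand_2050_tw = 28.0                            # "expected to double by 2050"

gap_2050_tw = demand_2050_tw - demand_today_tw   # height of the triangle in 2050
print(f"Gap to close by 2050: about {gap_2050_tw:.0f} TW of non-carbon supply or savings")

# If the gap were split into, say, seven equal wedges (a hypothetical count):
n_wedges = 7
print(f"Each wedge would have to grow to roughly {gap_2050_tw / n_wedges:.1f} TW by 2050")
```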

Of course, even eliminating the triangle altogether by holding energy usage at current levels would not reduce the greenhouse gas emissions that have been steadily heating up the planet; it would simply stabilize emissions at present levels, slowing the rate of further growth. But most analyses, such as those by MIT’s Joint Program on the Science and Policy of Global Change, indicate that merely stabilizing emissions still leaves a better-than-even chance of triggering a rise in global temperatures of at least 2.3 degrees Celsius by 2100, an amount that could lead to devastating changes in sea level, as well as more frequent flooding and droughts. Preventing such serious consequences, most analysts say, would require not just stabilizing emissions but drastically curtailing them — in other words, finding additional wedges to implement.

In the Science paper, authors Stephen Pacala and Robert Socolow of Princeton University listed 15 possible wedges: energy-saving technologies to chip away at the triangle. (The paper was recently updated by Socolow, in the Bulletin of the Atomic Scientists, to reflect the years that have passed since the initial publication and the lack of any net reductions so far.) While there are indeed technologies that can contribute to reductions on the order of terawatts, Moniz says Pacala and Socolow’s analysis is “not necessarily very realistic,” and “they made it sound like implementing one of these wedges is too easy.” In fact, every one of the options has its own difficulties, Moniz says.

But some aspects of bringing about such a drastic reduction are not controversial. “The number one thing is demand reduction, that’s clear,” Moniz says. “Most [scientists] think you need to get more than one wedge” from demand reduction — another way of saying increased efficiency — “because if you don’t, then we’d need a miracle” to achieve the needed reductions in emissions through other means, he says.

In fact, efficiency gains may yield several wedges, corresponding to multiple terawatts saved. That’s not so surprising when you consider that of all the energy consumed in the United States from all sources, some 58 percent is simply lost — that is, not actually used to do anything useful — says Jaffe, who co-teaches an MIT class called “The Physics of Energy.” For example, the typical automobile wastes more than two-thirds of the energy contained in the gasoline it burns, dumping it into the environment as heat.

“U.S. transportation, on average, is about 20 percent efficient,” Jaffe says. “That’s scandalous. There are tremendous savings to be gained,” he says, such as by continuing to raise the requirements for fuel efficiency of vehicles.

But after picking the relatively low-hanging fruit of efficiency, potential solutions for reducing emissions become more complex and less potent. Most of the technologies that are widely discussed as low- or zero-carbon alternatives are limited in their potential impact, at least within the next few decades.

For example, many people talk about a “nuclear renaissance” that could provide electricity with very little greenhouse gas impact. But to get even one terawatt of power from new nuclear plants “ain’t so simple,” Moniz says. The operating costs of new nuclear-plant designs, for example, will have to be proven through years of operating experience before financial markets will be willing to fund such systems on a large scale.

Over the longer run, such technologies may be crucial to meeting the world’s growing energy demands. By the end of this century, global energy needs could be more than triple those of today, says Ron Prinn, the TEPCO Professor of Atmospheric Science and co-director of MIT’s Joint Program on the Science and Policy of Global Change. “Most of that will be driven by the industrialization of China, India and Indonesia,” he explains, as these countries evolve from agrarian to industrialized societies.

Ultimately, Moniz suggests, a non-carbon energy future will likely consist largely of some combination of nuclear power, renewable energy sources and carbon-capture systems that allow fossil fuels to be used with little or no emissions of greenhouse gases. Which of these will dominate in a given area comes down to costs and local conditions.

“No one technology is going to get us into a sustainable energy future,” Jaffe says. Rather, he says, it’s going to take a carefully considered combination of many different approaches, technologies and policies.

Video

With the U.S. backing away from a cap-and-trade system, the EU Emissions Trading System (ETS) stands as a solitary, iconic, and often-criticized outpost for market-based approaches to limiting greenhouse gas emissions. A. Denny Ellerman evaluates the performance and prospects of the EU ETS and considers whether it, and the global trading vision embodied in the Kyoto Protocol, is at a dead end or, despite all the difficulties, is still the way to an effective global climate policy.

MIT News

In collaboration with Tsinghua University, MIT launches a new research project to analyze the impact of China’s existing and proposed energy and climate policies.


Multiple forecasts suggest that rapidly developing nations such as China will be responsible for most of the growth in carbon dioxide emissions over the next 50 years. This expectation is the driving force behind the formation of a new project involving researchers from MIT and China, known as the China Energy and Climate Project (CECP), which officially launches today.

The China Energy and Climate Project will involve close collaboration and personnel exchange between the MIT Joint Program on the Science and Policy of Global Change and the Institute for Energy, Environment and Economy at Tsinghua University in Beijing, China. Run in collaboration with the MIT Energy Initiative, the five-year project is based at MIT and directed by Valerie Karplus PhD ’11, a recent graduate of MIT’s Engineering Systems Division. John Reilly, co-director of the MIT Joint Program on the Science and Policy of Global Change and senior lecturer in the Sloan School of Management, will be a principal investigator.

The goal of the CECP is to analyze the impact of existing and proposed energy and climate policies in China on technology, energy use, the environment and economic welfare by applying — and, where necessary, developing — both quantitative and qualitative analysis tools.

The development and application of such new tools will include both national and regional energy-economic models of China. Growing out of the MIT Joint Program’s Emissions Prediction and Policy Analysis model, these new tools will be informed by three major components: First, researchers will study the behaviors and trends that drive micro-level decisions made by households and firms to better understand supply and demand within energy-intensive sectors. Second, the researchers will analyze specific technology prospects, including electric vehicles, advanced fuels and alternative sources of electricity, to determine China’s technology potential. Finally, current and proposed climate and energy policies in China will be evaluated for environmental and economic impact. These evaluations will be conducted primarily through the use of the models developed for the project, which will be based on similar methods employed in the MIT Joint Program over the last 20 years.

“We are building a strong trans-Pacific research team that brings expertise in economics, engineering and public policy to this exciting new project,” Karplus says. “Both sides are eager to get started, to learn from each other, and to produce rigorous analysis on important policy questions.”

The research carried out at MIT is funded by founding sponsors Eni, ICF International and Shell. The project will present its findings at an annual meeting in Beijing to influential members of the academic, industry and policy communities in China. The project aims to provide rigorous, transparent analysis of climate and energy policy options in China and their global implications.

MIT News

Anaerobic digesters provide a win-win opportunity for agriculture and energy.


When thinking about renewable energy sources, images of windmills and solar panels often come to mind. Now add to that picture livestock manure. Researchers from the MIT Joint Program on the Science and Policy of Global Change have found that the implementation of climate policies in the United States could hasten adoption of anaerobic digesters as a source for renewable electricity.

Anaerobic digesters break down organic waste using methane-producing bacteria. This methane can then be captured and burned to generate electricity. But anaerobic digesters have several other benefits besides production of renewable energy.

Traditional livestock-manure-management techniques include storing manure in anaerobic pits or lagoons, which release methane emissions into the atmosphere. In the United States, these emissions account for 26 percent of agricultural emissions of methane, a potent greenhouse gas. Diverting these emissions toward electricity generation thus reduces total U.S. greenhouse gas emissions and may qualify for low-carbon energy subsidies and methane-reduction credits. Anaerobic digesters can also reduce odor and pathogens commonly found in manure storage, and digested manure can be applied to crops as a fertilizer.

In collaboration with the University of Wisconsin, researchers used the MIT Emissions Prediction and Policy Analysis model to test the effects of a representative U.S. climate policy on the adoption of anaerobic digesters. Currently, support for anaerobic digesters has been limited and the economic value of most systems is insufficient to promote widespread adoption.

The lack of widespread use of anaerobic digesters is not due to lack of availability; the researchers estimate that cattle, swine and poultry manure deposited in lagoons or pits currently has the potential to produce 11,000 megawatts of electricity. (For scale, one megawatt is enough power to supply roughly 1,000 homes at any given moment.) The main reason for the lack of anaerobic digesters is that they compete with electricity from cheaper, traditional sources. However, under a climate policy that puts a price on all emissions, electricity produced from fossil fuels becomes more expensive, and low-carbon energy sources become more competitive.
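
The rule of thumb in the parenthetical (roughly 1,000 homes per megawatt) makes for a quick back-of-the-envelope check on that 11,000-megawatt figure. In the sketch below, the capacity factor is a hypothetical value added for illustration, not a number from the study.

```python
# Back-of-the-envelope scale check, using the figures quoted above.
digester_potential_mw = 11_000     # estimated potential from manure in lagoons and pits
homes_per_mw = 1_000               # rule of thumb from the article (instantaneous supply)

print(f"{digester_potential_mw:,} MW could supply roughly "
      f"{digester_potential_mw * homes_per_mw:,} homes at any given moment")

# Energy delivered over a year depends on how steadily the digesters actually run.
capacity_factor = 0.8              # hypothetical availability, for illustration only
hours_per_year = 8760
annual_gwh = digester_potential_mw * capacity_factor * hours_per_year / 1_000
print(f"At an assumed {capacity_factor:.0%} capacity factor: about {annual_gwh:,.0f} GWh per year")
```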

The study found that, under a representative climate policy, anaerobic digesters are introduced in 2025, when the price of carbon-dioxide equivalent, or CO2e, is $76 per ton. (CO2e expresses the amount of carbon dioxide that would cause the same radiative forcing as a given quantity of another greenhouse gas. Because different greenhouse gases have different global warming potentials, carbon dioxide is used as a reference gas to put emissions of multiple gases on a common scale.) By 2050, use of anaerobic digesters would contribute 5.5 percent of national electricity generation and would mitigate 151 million metric tons of CO2e, mostly from methane abatement. These mitigated emissions would also allow the livestock operations to sell emissions permits, adding economic value to the process.
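
The CO2e bookkeeping described in the parenthetical can be illustrated with a short conversion sketch. The 100-year global warming potential for methane used below (25, a commonly cited IPCC value from that period) and the methane quantity are assumptions for illustration; only the $76-per-ton price comes from the study.

```python
# Illustrative CO2-equivalent conversion, following the definition given above.
# The GWP value and methane quantity are assumptions, not numbers from the study.

GWP_100_CH4 = 25                     # tons of CO2 with the same warming effect as 1 ton of CH4

def to_co2e(tons_gas: float, gwp: float) -> float:
    """Convert emissions of a greenhouse gas into tons of CO2-equivalent."""
    return tons_gas * gwp

methane_abated_tons = 1_000_000      # hypothetical quantity of avoided methane
co2e_tons = to_co2e(methane_abated_tons, GWP_100_CH4)

carbon_price_per_ton = 76            # the $76/ton CO2e price quoted above
print(f"{methane_abated_tons:,} t of CH4 avoided = {co2e_tons:,.0f} t CO2e, "
      f"worth about ${co2e_tons * carbon_price_per_ton:,.0f} in permits at $76/t")
```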

Overall, the researchers identified a win-win situation, where incentives to reduce greenhouse gases would result in both market benefits (cheaper energy generation and sale of emissions credits) and non-market co-benefits (environmental and health gains, fertilizer uses) from adoption of anaerobic-digester operations. Such incentives, in the form of climate policies that provide methane-reduction credits and increase the costs of electricity from fossil fuels, provide the opportunity for a novel linkage between agriculture and energy production.
 

Readily available manure resources could contribute more than 11,000 MW of electricity generation potential. Each colored grid cell can support an anaerobic digester of a given capacity.

MIT News

Competing demands for food, fuels, and forests

How do you value an ecosystem? Putting a dollar value on natural systems such as forests has long vexed economists.

Forests provide “non-use values,” such as the pleasure of knowing that a natural system exists, and recreational values, such as hunting, fishing and wildlife viewing. But recently, ecologists have also sought to value a broader set of “ecosystem services,” or the goods and services that ecosystems provide to a market economy.

Ecosystem services related to land include conventional food and forest products, as well as the potential to produce biofuels. But ecosystems also have the ability to store carbon. If a price on carbon were established, an incentive to enhance carbon storage would be created. This new ecosystem service would need to be balanced against conventional food, forestry and biofuels production services. As more ecosystem services are recognized and fully priced in a market, the demand for land naturally increases.

Researchers from the MIT Joint Program on the Science and Policy of Global Change have used an economic model to explicitly represent the recreational value of ecosystems and their carbon-storage value. The study examines how demand for ecosystem services will affect land use, food prices and the prospects for biofuels production.

Their study found that demand for biofuels grows when a carbon tax is implemented, leading to increases in CO2 emissions from the conversion of forests to cropland. However, if that carbon tax also includes emissions from land use change, the resulting economic incentive is enough to avoid deforestation. And if a tradeable credit program to incentivize CO2 sequestration on land is implemented, significant reforestation occurs, such that land use becomes a large net sink for CO2.

This is a surprising result, as land use emissions currently make up about 20 percent of total emissions. But with carbon taxes and a tradeable credit program, land use would mitigate emissions by storing carbon in forests and replacing fossil fuels with biofuels. In fact, the analysis shows that if carbon storage were credited, land conversion would eventually store an amount of carbon equal to as much as one-third of total global energy-related emissions over the coming century.

Unfortunately, it’s not that simple — such policies would imply some difficult tradeoffs. In the scenario with full carbon pricing, substantial reforestation and biofuels production occurs, but at the expense of conventional agricultural products. The two new non-food demands for land cause commodity prices to increase, especially impacting livestock prices. The livestock sector is particularly affected because both the rental prices for grazing land and the price of grains used to feed livestock rise. As food prices rise, poor consumers will be considerably affected and may suffer.

“Since conventional agricultural goods are priced in markets, the higher [food] prices projected are efficient in the sense that they reflect the marginal value of storing carbon that would be lost if more land were devoted to food production,” explains John Reilly, co-director of the MIT Joint Program and co-author of the study. He adds, “However, the market values do not take into account equity considerations, and so in the absence of food programs worldwide such higher prices would place a disproportionate burden on lower income people.”

Some of the resulting increase in food prices may be offset by future agricultural technology. But even with such technologies, increasing food prices would still be a substantial departure from the historical trend of falling food prices. As new demands for land stem from an expanded view of ecosystem services, special attention will be needed to counteract the impacts on development and food security.

“It is a dilemma where climate change itself may have negative consequences for food production, but extensive reforestation to limit climate change may also squeeze food production by limiting the land available for conventional agriculture. Thrown on top is a demand for land for biofuels production that could put further pressure on food prices,” Reilly says. “The results are a cautionary tale in embracing efficient market solutions in a world where there are not ready mechanisms to deal with inequitable outcomes.”

MIT News

MIT researchers improve upon methods to model atmospheric aerosols.
Allison Crimmins

Urban regions account for an ever-increasing fraction of Earth’s population, and are consequently a growing source of air pollutants. These pollutants include anthropogenic aerosols, which have important climate and health implications. But modeling aerosol emissions from urban areas is difficult because of the fine temporal and spatial scales required. Thus, urban areas contribute significantly to the overall uncertainty and variability in global atmospheric model predictions of aerosol and pollutant distribution.

To address these uncertainties, researchers from the MIT Joint Program on the Science and Policy of Global Change set out to see if they could better model aerosol emissions and distribution from urban regions. To accurately model urban areas, factors such as the amount and distribution of emissions, the meteorological and geographical properties of the region, and the chemical and physical processing of emissions over time must be considered on spatial and temporal scales much finer than those of global models. Previously, modelers attempted to account for urban aerosol emissions by using a correction factor that diluted total aerosol emissions across global model grid cells. This dilution method, however, does not capture the heterogeneity of urban and non-urban areas within each grid cell.

Instead, the MIT researchers developed a new, detailed air quality model using meteorological and emissions data from 16 representative urban areas. This urban processing model examined seven different types of aerosols of varying sizes and composition, and modeled a total of 251 urban areas, including 91 from China, 36 from India, 50 from developed nations (Australia, Canada, the EU, Japan, Singapore, South Korea and the U.S.) and 74 from other developing nations. The urban processing model was then incorporated into a larger global model that simulates atmospheric chemistry and transport at regional to global scales. The researchers compared the atmospheric aerosol concentrations predicted by this new method with results from the dilution method.

“Not only are we the first group to successfully incorporate an urban-scale chemical processing model into a 3-dimensional global model,” explains Dr. Jason Cohen, the lead author on the report, “but our results resolve important processes which the rest of the modeling community still neglects to include.”

The study found that the urban processing model predicted a lower concentration of atmospheric aerosols than the dilution method, particularly in the Northern Hemisphere and in the summer season. In addition, the urban processing model showed increased concentrations of primary aerosols, like black carbon and organic carbon, and decreased concentrations of secondary aerosols, like sulfates. Thus excluding the urban processing model could lead to an overestimation of some aerosols and an underestimation of others.

These biases exist in the dilution method because urban areas tend to be more efficient at oxidizing and removing substances like black carbon and organic carbon from the atmosphere — not taking this into consideration leads to an overestimation of the concentration of these species. And because these primary species are oxidized, generation of secondary aerosol species actually increases in urban areas — not taking this into consideration leads to an underestimation of the concentration of those species.

Aerosols tend to cause negative radiative forcing. In other words, they have an overall “cooling effect” on the global climate. But using the urban processing method instead of the dilution method demonstrated an overall smaller concentration of aerosols in the atmosphere. Thus the detailed urban processing model predicts significantly less negative aerosol radiative forcing (less cooling) than the dilution method.

“We are continuing this effort, looking at the long-term climate effects of using detailed urban processing, such as how average surface temperature, precipitation, and cloud cover will be impacted,” says Cohen. “We hope that as we continue to look into the impacts of this new methodology and continue to learn more about the mistakes that the dilution simplification have led to, that others in the climate modeling community will adopt and use our new standard."

See also "Development of a fast, urban chemistry metamodel for inclusion in global models" (PDF)

New York Times

By JUSTIN GILLIS

The scale of Hurricane Irene, which could cause more extensive damage along the Eastern Seaboard than any storm in decades, is reviving an old question: are hurricanes getting worse because of human-induced climate change?

The short answer from scientists is that they are still trying to figure it out. But many of them do believe that hurricanes will get more intense as the planet warms, and they see large hurricanes like Irene as a harbinger.

While the number of the most intense storms has clearly been rising since the 1970s, researchers have come to differing conclusions about whether that increase can be attributed to human activities.

“On a longer time scale, I think — but not all of my colleagues agree — that the evidence for a connection between Atlantic hurricanes and global climate change is fairly compelling,” said Kerry Emanuel, an expert on the issue at the Massachusetts Institute of Technology.

Among those who disagree is Thomas R. Knutson, a federal researcher at the government’s Geophysical Fluid Dynamics Laboratory in Princeton, N.J. The rising trend of recent decades occurred over too short a period to be sure it was not a consequence of natural variability, he said, and statistics from earlier years are not reliable enough to draw firm conclusions about any long-term trend in hurricane intensities.

“Everyone sort of agrees on this short-term trend, but then the agreement starts to break down when you go back longer-term,” Mr. Knutson said. He argues, essentially, that Dr. Emanuel’s conclusion is premature, though he adds that evidence for a human impact on hurricanes could eventually be established.

While scientists from both camps tend to think hurricanes are likely to intensify, they do not have great confidence in their ability to project the magnitude of that increase.

One climate-change projection, prepared by Mr. Knutson’s group, is that the annual number of the most intense storms will double over the course of the 21st century. But what proportion of those would actually hit land is another murky issue. Scientists say climate change could alter steering currents or other traits of the atmosphere that influence hurricane behavior.

Storms are one of nature’s ways of moving heat around, and high temperatures at the ocean surface tend to feed hurricanes and make them stronger. That appears to be a prime factor in explaining the power of Hurricane Irene, since temperatures in the Atlantic are well above their long-term average for this time of year.

The ocean has been getting warmer for decades, and most climate scientists say it is because greenhouse gases are trapping extra heat. Rising sea-surface temperatures are factored into both Mr. Knutson’s and Dr. Emanuel’s analyses, but they disagree on the effect that warming in remote areas of the tropics will have on Atlantic hurricanes.

Air temperatures are also rising because of greenhouse gases, scientists say. That causes land ice to melt, one of several factors leading to a rise in sea level. That increase, in turn, is making coastlines more vulnerable to damage from the storm surges that can accompany powerful hurricanes.

Overall damage from hurricanes has skyrocketed in recent decades, but most experts agree that is mainly due to excessive development along vulnerable coastlines.

In a statement five years ago, Dr. Emanuel, Mr. Knutson and eight colleagues called this “the main hurricane problem facing the United States,” and they pleaded for a reassessment of policies that subsidize coastal development — a reassessment that has not happened.

“We are optimistic that continued research will eventually resolve much of the current controversy over the effect of climate change on hurricanes,” they wrote at the time. “But the more urgent problem of our lemming-like march to the sea requires immediate and sustained attention.”

Hurricane Irene
CNN

By Kerry Emanuel, Special to CNN
August 25, 2011 11:45 p.m. EDT

Editor's note: Kerry Emanuel is a professor of meteorology at the Massachusetts Institute of Technology.


(CNN) -- At this moment, Hurricane Irene poses a risk to almost everyone living along the Eastern Seaboard, from Florida to the Canadian Maritimes. Where will Irene track? Which communities will be affected and how badly? Millions of lives and billions of dollars are at stake in decisions made by forecasters, emergency managers and all of us who live in or own property in harm's way.

It is natural to wonder how good the forecasts are likely to be. To what extent can we trust the National Hurricane Center, local professional forecasters and emergency managers to tell us what will happen and what to do? Undeniably, enormous progress has been made in the skill with which hurricanes and other weather phenomena are predicted. Satellites and reconnaissance aircraft monitor every hurricane that threatens the U.S., collecting invaluable data that are fed into computer models whose capacity to simulate weather is one of the great wonders of modern science and technology.

And the human effort and taxpayer funds that have been invested in this endeavor have paid off handsomely: A three-day hurricane track forecast today is as skillful as a one-day forecast was just 30 years ago. This gives everyone more time to respond to the multiple threats that hurricanes pose.

And yet there are still things we don't know.

For example, we do not know for sure whether Irene will make landfall in the Carolinas, on Long Island, or in New England, or stay far enough offshore to deliver little more than a windy, rainy day to East Coast residents. Nor do we have better than a passing ability to forecast how strong Irene will get. In spite of decades of research and greatly improved observations and computer models, our skill in forecasting hurricane strength is little better than it was decades ago. Why is this so, and how should we go about making decisions in the context of uncertain forecasts?

Since the pioneering work of Edward N. Lorenz in the early 1960s, we have known that weather, including hurricanes, is an example of a chaotic process. Small fluctuations (Lorenz's "butterfly effect") that cannot be detected can quickly amplify and completely change the outcome in just a few days. Lorenz's key insight was that even in principle, one cannot forecast the evolution of some kinds of chaotic systems beyond some time horizon.

In the case of weather, meteorologists think that time horizon is around two weeks or so. Add to this fundamental limitation that we measure the atmosphere imperfectly, sparsely and not often enough, and that our computer models are imperfect, and you arrive at the circumstance that everyone knows from experience: weather forecasts are not completely reliable, and their reliability deteriorates rapidly the further out in time the forecast is made. A forecast for a week from today is dicey at best, and no one even tries to forecast two weeks out. But in the past decade or two, meteorologists have made another important advance of which few outside our profession are aware: We have learned to quantify just how uncertain any given forecast is.
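
Lorenz’s point about sensitivity to initial conditions is easy to demonstrate numerically. The sketch below integrates his classic 1963 three-variable convection model from two nearly identical starting states and watches the separation between them grow; the toy model, integration scheme and step size are illustrative choices, not anything resembling an operational forecast system.

```python
# Toy demonstration of Lorenz's "butterfly effect": two nearly identical initial
# states of his 1963 three-variable system diverge until they are unrelated.

def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz-63 system one step using simple Euler integration."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return (x + dt * dx, y + dt * dy, z + dt * dz)

a = (1.0, 1.0, 20.0)
b = (1.0 + 1e-6, 1.0, 20.0)   # a one-in-a-million perturbation of the first variable

for step in range(1, 6001):
    a, b = lorenz_step(a), lorenz_step(b)
    if step % 1000 == 0:
        separation = sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
        print(f"step {step:5d}: separation = {separation:.2e}")
```

The separation grows roughly exponentially until it is as large as the difference between two arbitrary states of the system, which is the practical meaning of the forecast horizon described above.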

This is significant, because the degree of uncertainty itself varies greatly from one day to the next. On one occasion, we might be able to forecast a blizzard five days out with great confidence; on another, we might have very little faith in tomorrow's forecast.

We estimate the level of confidence in a particular forecast by running many different computer models many times, not just once. Each time we run a model, we feed it a slightly different but equally plausible estimate of the current state of the atmosphere, given that our observations are few, far between and imperfect. In each case, we get a different answer; the differences are typically small to begin with but can grow rapidly, so that after a week or so the difference between any two forecasts is as great as the difference between any two arbitrary states of the weather at that time of year. No point in going any further!

But we observe that sometimes and in some places, the differences grow slowly, while at other times and places, they may grow much more rapidly. And by using different computer models, we can take into account our imperfect understanding of the physics of the atmosphere. By these means, we can state with some accuracy how confident we are in any particular forecast for any particular time and place. Today, one of the greatest challenges faced by weather forecasters is how best to convey their estimates of forecast confidence to the public.

Ideally, we would like to be able to say with full scientific backing something like "the odds of hurricane force winds in New York City sometime between Friday and Sunday are 20%." We have far to go to perfect these, but probabilistic statements like this are the best for which we can hope.
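
A drastically simplified sketch of how such a percentage can come out of an ensemble: perturb the uncertain inputs, run the forecast many times, and report the fraction of members that cross a threshold. The toy wind "model," the perturbation sizes and the starting values below are all invented for illustration; real probabilistic forecasts come from ensembles of full numerical weather models.

```python
import random

# Toy ensemble "forecast": the fraction of perturbed runs exceeding a threshold
# becomes the stated probability. All numbers here are invented for illustration.

random.seed(0)
N_MEMBERS = 100
HURRICANE_FORCE_KT = 64              # hurricane-force wind threshold, in knots

def toy_forecast(initial_wind_kt: float) -> float:
    """Pretend multi-day forecast: future wind depends on an uncertain growth factor."""
    growth = random.gauss(1.1, 0.25)
    return initial_wind_kt * growth

analysis_wind_kt = 55.0              # best estimate of the current winds
members = [toy_forecast(analysis_wind_kt + random.gauss(0, 5)) for _ in range(N_MEMBERS)]

probability = sum(w >= HURRICANE_FORCE_KT for w in members) / N_MEMBERS
print(f"{100 * probability:.0f}% of {N_MEMBERS} ensemble members reach hurricane force")
```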

We know from experience that everyone will deal with such probabilistic forecasts in their own way: People have a very broad range of risk aversion. But the next time you are inclined to criticize weather forecasters for assigning probabilities to their forecasts, remember this essay and consider how much better off you are than with other types of forecasters you rely on. Your stockbroker, for example.

The opinions expressed in this commentary are solely those of Kerry Emanuel.

Environmental Research Letters News

Nadya Anscombe

 

If wind power is going to meet 20% of our predicted energy needs in 2100, millions of wind turbines must be installed around the globe. Modelling performed by researchers at Massachusetts Institute of Technology, US, has shown that these vast wind farms, if installed in offshore regions, could reduce the temperature of the lower atmosphere above the site by 1 °C.

 

This is in contrast to earlier work that found that a land-based deployment of wind turbines large enough to meet one-fifth of predicted world energy needs in 2100 could lead to a significant temperature increase in the lower atmosphere over the installed regions.

Chien Wang and Ronald Prinn say their findings show how important it is that rigorous scientific assessments are made before deployment of large-scale wind farms. "We were surprised by our findings at first but we soon realized that the cooling we predicted is due principally to the enhanced latent heat flux from the sea surface to the lower atmosphere," Wang told environmentalresearchweb.

Wang and Prinn found that the effect varied depending on the location of the wind farm. In tropical and mid-latitude sites, the temperature of the lower atmosphere was reduced by up to 1 °C, whereas even greater reductions were seen in the high-latitude sites.

The consequences of such a temperature change are not clear but the researchers believe that it will have an effect on temperatures, clouds, precipitation and large-scale circulation beyond the installed regions. "However, these non-local impacts or teleconnections are much less significant than we saw in the land cases," said Wang. "This is likely due to the much lower response of the ocean to the imposed surface-drag changes relative to the response of the land to the imposed changes in both surface roughness and displacement height."

Wang and Prinn also examined the issues of intermittency, and hence reliability, of large-scale deployment of wind-driven electrical power generation by seasonally averaging the available wind power in various regions of the world. They found that intermittency would be a major issue for a power generation and distribution system that relies on the harvest of wind power from large-scale offshore wind farms.

"Intermittency presents a major challenge for power management, requiring solutions such as on-site energy storage, back-up generation and very long-distance power transmission for any electrical system dominated by offshore wind power," said Wang.

Wang is also keen to point out that the method he and Prinn used to simulate the offshore wind-turbine effect on the lower atmosphere involved simply increasing the ocean surface drag. "While this method is consistent with several detailed fine-scale simulations of wind turbines, it still needs further study to ensure its validity," he said.

Wang and Prinn published their work in Environmental Research Letters (ERL).

 

Texas Green Report (Sierra Club)

Dr. Tammy Thompson, a postdoctoral associate with the MIT Joint Program on the Science and Policy of Global Change, will be speaking at a press conference held by the Texas Sierra Club at Houston City Hall on Tuesday, July 19, at 10:00 a.m. Dr. Thompson will speak on air pollution in Texas.
