CS3 In the News

Commentary
The Energy Collective

Update 7:30pm, Saturday, November 23
Decisions have been adopted on the three major elements of COP19: Durban Platform, Loss and Damage, and institutional mechanisms for finance.


Update 4:11pm, Saturday, November 23
The Durban Platform decision has been adopted, and the plenary to finalize the remaining COP decisions on finance and loss and damage will convene soon.

 
Two weeks at the UN climate negotiations in Warsaw (COP19) have passed, and talks continued well into Saturday. As I wrote earlier this week, this conference was an important step on the way to Paris in 2015, when an international treaty for action on climate change post-2020 will be sought. Much reporting has focused on the acrimony between developed and developing countries, largely based on outstanding issues of development assistance and compensation for climate damages that were held by some as preconditions for success.  Into the late hours, unmet expectations from some developing countries pushed discussion on the prospective structure and timeline for the 2015 agreement into Saturday afternoon.


Commentary
The Energy Collective

We are heading into the second week of UN climate negotiations in Warsaw (referred to as COP19), where 194 countries are busy laying the groundwork for an international treaty to be concluded, it is hoped, by 2015. Created under the auspices of the UN Framework Convention on Climate Change (UNFCCC), this would be a first-of-its-kind agreement to guide global action from 2020. A casual observer may recall similar rhetoric leading up to the Copenhagen climate summit, which produced a three-page “accord” rather than the permanent treaty solution that had been promised. However, in Warsaw, several issues are on the table (and several others, crucially, off) that demonstrate the hardiness and progressive (though often glacial) march of multi-lateral environmental governance.

Detractors of the UN climate regime will question the effectiveness of a 20-plus-year process that functions (or not) on consensus among 194 members, ranging from the chief sources of the climate problem to its most vulnerable victims. After all, the only legally binding outcome to date – the Kyoto Protocol – did little to slow emissions over its first commitment period (2008-2012), and fewer countries still have signed up for targets in the next period. Last week, Japan scaled back its 2020 targets, following on the heels of the Australian coalition government's announced intention to repeal the country's carbon tax, drawing the ire of virtually everyone.

At the same time, scholars have noted that outside the UNFCCC process there have been countless bi-lateral and multi-lateral initiatives, such as those between the US and China and within the G20, along with a proliferation of platforms collecting commitments from states/provinces, cities and corporations. These actors are arguably more important than many nation-states in terms of mitigation potential, yet do not have a seat at the table. UN Secretary-General Ban Ki-moon is hosting a climate summit for world leaders next year to push for greater commitments. Businesses, cities and others are expressly invited.

Nevertheless, I argue that one should watch this treaty process carefully, because despite the outward lack of progress, the UNFCCC will still help stimulate both pre-2020 and post-2020 action on climate change. Further, it has significant implications for how the global community chooses to address other environmental issues and cross-cutting challenges such as poverty eradication.

First, the UN has exercised its convening power to leverage commitments from all major emitters, and can do this again. High expectations in advance of the Copenhagen climate summit in 2009 led the BASIC countries (Brazil, South Africa, India and China) to depart from tradition and announce significant 2020 targets. They were not obligated to do so by the Convention, which created a “firewall” between industrialized and developing countries, with only the former required to mitigate. Despite the failure to achieve consensus at Copenhagen, these commitments stuck, and were enshrined in the official outcome the following year. The US, which never ratified the Kyoto Protocol, joined in committing to reductions.

Second, the legitimacy of the UNFCCC as the only climate-dedicated forum with the official blessing of every country cannot be overstated. Addressing climate change, whose costs are concentrated in a handful of industries in every country but whose benefits are diffuse and highly variable, is a classic free-rider problem, whereby a non-cooperative party enjoys the positive externalities of others’ sacrifices. A common front means greater international pressure on parties to fulfill their commitments.

Third, the UNFCCC has historically been a precedent-setting institution. Market mechanisms such as carbon offset trading were enshrined in the Kyoto Protocol, and as a result they have become a favored policy approach for domestic programs around the world. In the last several years, worries about the feasibility of reaching consensus have been pushing the talks toward a pledge-based system (perhaps an iterative offer-and-review process), in lieu of Kyoto-style top-down allocations of rights to the atmosphere. This pledge-based approach had a defining influence on the Rio+20 Earth Summit in 2012 and will likely only grow in influence.

At the center of the negotiations has been a moral and legal debate surrounding equity: how to balance historical responsibility against current and future emissions limits. Given limited resources, achieving the Convention's goal of preventing “dangerous anthropogenic interference with the climate system” will involve inevitable trade-offs between efficiency and equity. How this balance is struck could easily become the basis for approaches to other multi-lateral issues and, more broadly, alter the structure of development assistance.

So, where do the Warsaw negotiations stand? The Durban Platform for Enhanced Action, the track responsible for crafting the treaty language and strengthening ambition, has released a draft decision text. It is the result of a week of tense negotiation that went late into the weekend, and is surprisingly concise. It acknowledges the tectonic shift mentioned above toward a diversity of actors and commitments and lays out basic elements of a treaty (though these will need to be filled out as the week progresses).

The state-run Chinese press have been explicit that long-term finance is the key to success at Warsaw (see, e.g., here and here). One of the commitments brought to Copenhagen was a pledge from industrialized countries to mobilize $100 billion / year by 2020 in aid to help developing countries mitigate and adapt to climate change. China wants more details on how that number is going to be achieved, including setting intermediate targets. This is a particularly vexing issue, since parties can’t agree on how much aid has already been distributed to meet a 2012 deadline (one analysis says the $30 billion target was reached, while developing countries claim it is mostly existing aid repurposed).

Following the disastrous Typhoon Bopha in the Philippines last year, countries and civil society rallied around a compensation scheme for “loss and damage” resulting from climate change. Typhoon Haiyan ripped through the same island chain just as the negotiations opened this year. Yeb Sano, lead negotiator for the Philippines, electrified the venue with his demand for an institutional fix, accompanied by a fasting event that has since spread worldwide and a 500k+ Avaaz petition, possibly raising the stakes for success at the conference.

Much work still remains to convince developing countries that the evolving approaches within the UN (“offer-and-review”, voluntary commitments from subnational actors, and the gradual erosion of the 1992 “firewall” between developed and developing countries) are in their best interests. Success at Warsaw will likely hinge on industrialized countries delivering on promises of finance, while generating trust with other parties that more ambitious pledges on mitigation will follow.

In The News
MIT News

In deciding how best to meet the world’s growing needs for energy, the answers depend crucially on how the question is framed. Looking for the most cost-effective path provides one set of answers; including the need to curtail greenhouse-gas emissions gives a different picture. Adding the need to address looming shortages of fresh water, it turns out, leads to a very different set of choices.

That’s one conclusion of a new study led by Mort Webster, an associate professor of engineering systems at MIT, published in the journal Nature Climate Change. The study, he says, makes clear that it is crucial to examine these needs together before making decisions about investments in new energy infrastructure, where choices made today could continue to affect the water and energy landscape for decades to come.

The intersection of these issues is particularly critical because of the strong contribution of the electricity-generation industry to overall greenhouse-gas emissions, and the strong dependence of most present-day generating systems on abundant supplies of water. Furthermore, while power plants are a strong contributor to climate change, one expected result of that climate change is a significant change of rainfall patterns, likely leading to regional droughts and water shortages.

Surprisingly, Webster says, this nexus is a virtually unexplored area of research. “When we started this work,” he says, “we assumed that the basic work had been done, and we were going to do something more sophisticated. But then we realized nobody had done the simple, dumb thing” — that is, looking at the fundamental question of whether assessing the three issues in tandem would produce the same set of decisions as looking at them in isolation.

The answer, they found, was a resounding no. “Would you build the same things, the same mix of technologies, to get low carbon emissions and to get low water use?” Webster asks. “No, you wouldn’t.”

In order to balance dwindling water resources against the growing need for electricity, a quite different set of choices would need to be made, he says — and some of those choices may require extensive research in areas that currently receive little attention, such as the development of power-plant cooling systems that use far less water, or none at all.

Even where the needed technologies do exist, decisions on which to use for electricity production are strongly affected by projections of future costs and regulations on carbon emissions, as well as future limits on water availability. For example, solar power is not currently cost-competitive with other sources of electricity in most locations — but when balanced against the need to reduce emissions and water consumption, it may end up as the best choice, he says.

“You need to use different cooling systems, and potentially more wind and solar energy, when you include water use than if the choice is just driven by carbon dioxide emissions alone,” Webster says.

His study focused on electricity generation in the year 2050 under three different scenarios: purely cost-based choices; with a requirement for a 75 percent reduction in carbon emissions; or with a combined requirement for emissions reduction and a 50 percent reduction in water use.

To deal with the large uncertainties in many projections, Webster and his co-authors used a mathematical simulation in which they tried 1,000 different possibilities for each of the three scenarios, varying each of the variables randomly within the projected range of uncertainty. Some conclusions showed up across hundreds of simulations, despite the uncertainties.
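As a rough illustration of this kind of scenario analysis (a sketch only, not the authors' model: the technology set, cost ranges, and caps below are invented for the example), a Monte Carlo loop over uncertain inputs might look like this:

```python
import random

# Toy Monte Carlo sketch: draw uncertain generation costs, pick a least-cost
# option under each policy scenario, and see which choices hold up across draws.
TECH = {
    # name: (CO2 per MWh, water per MWh) -- stylized relative values
    "coal":    (1.0, 1.0),
    "gas":     (0.5, 0.6),
    "nuclear": (0.0, 1.4),
    "wind":    (0.0, 0.0),
    "solar":   (0.0, 0.0),
}

def sample_costs():
    """One draw of uncertain levelized costs ($/MWh); ranges are assumptions."""
    return {
        "coal":    random.uniform(60, 90),
        "gas":     random.uniform(50, 110),
        "nuclear": random.uniform(90, 140),
        "wind":    random.uniform(70, 120),
        "solar":   random.uniform(80, 200),
    }

def cheapest_allowed(costs, co2_cap, water_cap):
    """Cheapest technology under the caps: a toy stand-in for a full
    capacity-expansion optimization."""
    feasible = [t for t, (co2, h2o) in TECH.items()
                if co2 <= co2_cap and h2o <= water_cap]
    return min(feasible, key=lambda t: costs[t])

scenarios = {
    "cost_only":      (float("inf"), float("inf")),
    "co2_limited":    (0.25, float("inf")),  # ~75% below coal's CO2, assumed
    "co2_plus_water": (0.25, 0.5),           # plus ~50% less water, assumed
}

tally = {name: {} for name in scenarios}
for _ in range(1000):  # 1,000 draws, as in the study's sampling approach
    costs = sample_costs()
    for name, (co2_cap, water_cap) in scenarios.items():
        pick = cheapest_allowed(costs, co2_cap, water_cap)
        tally[name][pick] = tally[name].get(pick, 0) + 1

for name, counts in tally.items():
    print(name, counts)  # choices that dominate across draws are robust
```

The value of the exercise is that any choice dominating most of the 1,000 draws is robust to the stated uncertainty, which is how conclusions such as "coal drops to essentially zero under the combined limits" can be drawn despite uncertain inputs.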

Based on cost alone, coal would generate about half of the electricity, whereas under the emissions-limited scenario that would drop to about one-fifth, and under the combined limitations, it would drop to essentially zero. While nuclear power would make up about 40 percent of the mix under the emissions-limited scenario, it plays almost no role at all in either the cost-alone or the emissions-plus-water scenarios.

“We’re really targeting not just policymakers, but also the research community,” Webster says. Researchers “have thought a lot about how do we develop these low-carbon technologies, but they’ve given much less thought to how to do so with low amounts of water,” he says.

While there has been some study of the potential for air-cooling systems for power plants, so far no such plants have been built, and research on them has been limited, Webster says.

Now that they have completed this initial study, Webster and his team will look at more detailed scenarios about “how to get from here to there.” While this study looked at the mix of technologies needed in 2050, in future research they will examine the steps needed along the way to reach that point.

“What should we be doing in the next 10 years?” he asks. “We have to look at the implications all together.”

In addition to Webster, the work was carried out by graduate student Pearl Donohoo and recent graduate Bryan Palmintier, of the MIT Engineering Systems Division. The work was supported by the National Science Foundation, the U.S. Department of Energy, and the Martin Family Foundation.

In The News
MIT News

Watch video from the conference here: www.climatecolab.org/conference2013/virtually

As international climate negotiators meet this month for the 19th meeting of the UN Conference of Parties (COP19), there’s an increasing realization that top-down efforts to confront climate change aren’t working—or at least, they’re not working quickly enough.  Are there ways that large groups of people—even a global community—could work together to take action now?  That was the focus of an MIT climate conference last week, entitled “Crowds and Climate.”
 
“We know how to make real progress on climate change; what we must create is the political will to achieve it. Creating that will requires all of us to engage. It can’t be a top-down process,” said Environmental Defense Fund President Fred Krupp, the Wednesday keynote speaker at the event. “The arc of the moral universe may bend toward justice, but the line on the graph of global emissions won’t bend until we make it do so.”
 
To help bend that line downward and develop creative, innovative ideas for action on climate change, the conference explored the role new technology-enabled approaches—such as crowdsourcing, social media and big data—could play. It was sponsored by the MIT Center for Collective Intelligence’s Climate CoLab, and co-sponsored by the MIT Energy Initiative, MIT Joint Program on the Science and Policy of Global Change, and MIT Sloan Sustainability.
 
The conference underscored the aim of the Climate CoLab itself, which looks to shift public engagement on climate change from one of just science or ideology to a much broader and more inclusive participation. It does so by crowdsourcing, through contests, citizen-generated ideas on a range of topics relevant to climate change. The community has been doubling or tripling with each annual contest, and there are now more than 10,500 registered members.  The community’s rapid growth shows that there are many smart and creative people around the globe ready to engage in these issues.
 
“By bringing together experts, policy makers, business people, and many others, we hope the Climate CoLab can help plan—and gain support for—better climate actions than anything we humans would otherwise have done,” said the Director of the MIT Center for Collective Intelligence and Principal Investigator for the Climate CoLab, Prof. Thomas Malone. “Even if we don’t achieve this, engaging crowds in the process will likely increase support for and awareness of the solutions.”
 
In addition to honoring the 28 winners of this year’s contests, and awarding a Grand Prize winner, the conference gathered a large group of smart and creative people to discuss these issues. The two-day event attracted more than 800 in-person and online attendees.
 
Organizing Crowds
 
Using a bottom-up approach to confront climate change first requires building a base, according to Marshall Ganz, senior lecturer in public policy at Harvard University. “What drives movements is not branding, but relationships; …building constituencies at the base,” he said.
 
To build a base, the crowd needs to be able to relate, said Andrew Hoffman, Holcim (U.S.) Professor of Sustainable Enterprise at the University of Michigan.  “As long as it’s this idea in the ivory tower presented by a scientist using a language people don’t understand, they will still question whether they believe that theory or not,” Hoffman said.
 
Instead, climate change should be tied to things that resonate with people, such as the rising cost of insurance. “Talk about the ski season to skiers, or about habitat changes to hunters,” said Kate Gordon, Vice President at Next Generation.
 
Inserting unexpected validators also helps. Bob Inglis, a former Republican congressman from South Carolina and the founder of the Energy and Enterprise Initiative at George Mason University, pointed out that business people can act as good validators: “When smart money starts moving that way, that has a real educational impact on people.”
 
Ganz and Hoffman both noted that outside factors—such as events—can shift the conversation toward a desired social change.  Hoffman used Hurricane Katrina and Superstorm Sandy as examples.
 
“Sandy created a fundamental change in how we think about climate change, Katrina did not,” Hoffman said. “Katrina hit a minority population that was fundamentally disconnected and had no spokespeople. Sandy hit an affluent population that was politically connected and had Michael Bloomberg.”
 
Sergej Mahnovski, director of New York City’s Office of Long-Term Planning and Sustainability, backed up this idea that one event can spur change. He noted that New York City’s sustainability planning was moving slowly until Sandy hit and brought a sense of urgency to the people of New York, and thus to their leaders.
 
While it may take time to create a crowd, David Yarnold, president of the National Audubon Society, pointed out that there is precedent for social movements creating environmental change. The first Earth Day was one of the largest grassroots demonstrations in the nation’s history, and it was followed by the creation of the U.S. Environmental Protection Agency and the passage of the Clean Air Act.
 
“There is precedent in the U.S., in terms of environmental and conservation action, demonstrating that when the American people decide that something is a priority, they’ll actually do it,” Yarnold said.
 
Tools for Change
 
In the age of Twitter and iPads, technological advances can be used to help create a community and expand its impact. This is as true in the context of climate change as it is for almost any global challenge today.
 
“In the past, changes had figureheads,” said Practically Green CEO Susan Hunt Stevens. “With Arab Spring, there were some players, but social media ended up being the figurehead… it can be used to bring visibility to the change and connect people.”
 
Other nontraditional forms of media have aided the climate cause as well.
 
Pace University Senior Fellow and New York Times Dot Earth Blogger Andrew Revkin gave many examples – including a high school science teacher who posts simple explanatory videos on YouTube for all to watch.
 
“Just building the capacity for the exchange of ideas across boundaries is really important and I think it can lead to some surprising results,” Revkin said.
 
Crowds can also help compile big data and turn it into useful information, said Bina Venkataraman, President Obama’s senior advisor on climate change innovation.  In addition, she noted crowds play an important role in validating what data is useful and what is not.
 
But while crowds can validate data, we also need ways to validate crowds, according to MIT Sloan School of Management Professor John Sterman. “If you just have the crowd and you don’t have the ability to test the ideas, nothing’s going to work… learning’s much slower.”
 
Sterman directs Climate Interactive, an organization that has created a simple climate model that anyone can run on their laptops. He has embedded his model into the Climate CoLab contest so that the crowds’ ideas can be tested instantly using the models to see which ideas work best.
 
“Our goal is to test whether that actually amplifies the rate at which we can come up with excellent solutions,” Sterman said. “So you have this combination of the talents and wisdom of the crowds with the testing platform that’s enabled by the simulation models that you can run instantly.”
 
Sterman concluded that the Climate CoLab’s work, and conferences like this one, are needed to further the climate debate and help bring about solutions.
 
“Climate science needs to continue. But it’s not the bottleneck to progress,” Sterman said. “We need more research, and I think the Climate CoLab enables this research on… how that turns a crowd into a movement and into organizations that can make a difference.”
 
Anyone from around the world is invited to submit ideas to the Climate CoLab, which is now receiving proposals.  Keeping with the crowdsourcing orientation, community members are invited to suggest contests they think can make a difference at http://climatecolab.org.
 

In The News
Cargill

MIT students and faculty interested in climate change’s impact on agriculture and food production gathered on campus Tuesday for a half-day of dialogue at a Food Symposium sponsored by the MIT Joint Program on the Science and Policy of Global Change. In his keynote address, Cargill Vice Chairman Paul Conway said that when faced with the potential impact of climate change, one should keep the Stockdale Paradox in mind: confront the brutal reality while remaining optimistic. He also said that climate change was a critical issue for the company and that understanding it would help Cargill adapt and deliver on its strategy to be balanced, diverse and resilient.

Conway’s remarks focused on the trends impacting food security, including urbanization, changing diets and biofuels. He fielded questions about how intensification can be done in a sustainable way and how Cargill could help reduce food waste.

The topic of climate change and its potential impact on food production is timely: the Intergovernmental Panel on Climate Change (IPCC) is expected to issue its latest assessment of the impact of rising temperatures and climate variability on the global food system in March. The report was leaked last week, and the New York Times featured a story about the potential risks to food supplies.

Other panelists included Thomas Hertel, professor of agricultural economics at Purdue University, who discussed the impacts of population and income growth on food prices and of rising temperatures on wheat and maize yields. Hertel concluded that agricultural productivity growth holds the key to protecting the environment and improving food security, but that climate change will make this harder. He also believes that rising incomes and declining real prices offer the prospect of significantly reducing the number of malnourished people in the world.

Cargill has been a sponsor of the MIT Joint Program on the Science and Policy of Global Change since 2008. MIT’s program is one of several university programs Cargill funds that are helping the company better understand climate science, its impact on crop yields, adaptability, sustainability and the implications for food security.

In The News
ClimateWire

Julia Pyper, E&E reporter
Published: Monday, November 5, 2013


Targets set in China's latest five-year plan aimed at reducing harmful air pollution from the burning of fossil fuels will result in unintended reductions in carbon dioxide emissions that exceed the country's near-term CO2 reduction goals, according to a new study by the Massachusetts Institute of Technology.

China's rapid economic development has led to a spike in pollution from power plants and the industrial sector. To address pollutants that cause acid rain and smog, the government of China established regulations under the 12th five-year plan to reduce emissions of sulfur dioxide (SO2) by 8 percent and nitrogen oxides (NOx) by 10 percent below 2010 levels by 2015.

The plan also calls for a 17 percent reduction in carbon dioxide intensity from 2010 levels.

In a study published this week in the journal Global Environmental Change, MIT researchers found that China's SO2 and NOx regulations alone could reduce carbon dioxide intensity by 20 percent, surpassing its official near-term target.
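For readers unfamiliar with the metric, carbon intensity is emissions per unit of economic output, so an intensity target is weaker than an absolute cap when the economy is growing. A back-of-envelope sketch (the GDP growth figure below is an assumption for illustration only):

$$I = \frac{\text{CO}_2\ \text{emissions}}{\text{GDP}}, \qquad \text{a 17\% cut means } I_{2015} \le 0.83\, I_{2010}.$$

If GDP were to grow by, say, 40 percent over 2010-2015, absolute emissions could still rise: $E_{2015} \le 0.83 \times 1.40 \times E_{2010} \approx 1.16\, E_{2010}$.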

"It is clear that the unintended benefits of the regulations are substantial and would allow the government to improve air quality, while also cutting the most potent greenhouse gas. We have also learned that carbon emissions may not rise as high as some forecasts suggest, if concerns about conventional pollutants lead to air-pollution reduction policies," said Kyung-Min Nam, lead author of the study and a postdoctoral associate with the MIT Joint Program on the Science and Policy of Global Change.

Carbon dioxide and conventional air pollutants, like SO2 and NOx, come from the same sources, most often the combustion of fossil fuels. Switching from coal to natural gas or reducing energy use in response to air pollution constraints, therefore, will also reduce carbon dioxide emissions.

It will reduce costs, too. MIT researchers estimate that China's SO2 and NOx regulations will cut the cost of complying with the carbon dioxide intensity target by $3 billion.

"Considering that the current CO2 target can be attained by the secondary benefits of the SO2 and NOx regulations, policymakers would do well to coordinate the regulatory efforts," said Valerie Karplus, a co-author of the study and the director of the MIT Joint Program's China Energy and Climate Project, in a statement. "Currently, they have room to adopt more ambitious CO2 reduction policies, but the impact will hinge on effective implementation."

The study finds that China could meet its near-term carbon and pollution goals while still expanding the use of coal. However, this path would have a lock-in effect that makes it more expensive to cut emissions in the future. China can achieve cost savings for industries by requiring earlier action and investment toward long-term emissions targets, according to the authors.

"China's local air pollution is extremely serious, and the cost of not acting is too big to ignore," Nam added.

In most of China's major cities, particulate matter concentration levels exceed the World Health Organization's guideline levels by a huge margin, he said. Many studies estimate that health damage from the excess pollution in China is substantial, reaching between 4 and 7 percent of gross domestic product.

"So while reducing pollution in China may require non-trivial compliance costs, it will also increase the welfare of local citizens substantially," Nam said.

In The News
CNN

Editor's note: Climate and energy scientists James Hansen, Ken Caldeira, Kerry Emanuel and Tom Wigley released an open letter Sunday calling on world leaders to support development of safer nuclear power systems.

(CNN) -- To those influencing environmental policy but opposed to nuclear power:

As climate and energy scientists concerned with global climate change, we are writing to urge you to advocate the development and deployment of safer nuclear energy systems. We appreciate your organization's concern about global warming, and your advocacy of renewable energy. But continued opposition to nuclear power threatens humanity's ability to avoid dangerous climate change.

We call on your organization to support the development and deployment of safer nuclear power systems as a practical means of addressing the climate change problem. Global demand for energy is growing rapidly and must continue to grow to provide the needs of developing economies. At the same time, the need to sharply reduce greenhouse gas emissions is becoming ever clearer. We can only increase energy supply while simultaneously reducing greenhouse gas emissions if new power plants turn away from using the atmosphere as a waste dump.

Renewables like wind and solar and biomass will certainly play roles in a future energy economy, but those energy sources cannot scale up fast enough to deliver cheap and reliable power at the scale the global economy requires. While it may be theoretically possible to stabilize the climate without nuclear power, in the real world there is no credible path to climate stabilization that does not include a substantial role for nuclear power.

We understand that today's nuclear plants are far from perfect. Fortunately, passive safety systems and other advances can make new plants much safer. And modern nuclear technology can reduce proliferation risks and solve the waste disposal problem by burning current waste and using fuel more efficiently. Innovation and economies of scale can make new power plants even cheaper than existing plants. Regardless of these advantages, nuclear needs to be encouraged based on its societal benefits.

Quantitative analyses show that the risks associated with the expanded use of nuclear energy are orders of magnitude smaller than the risks associated with fossil fuels. No energy system is without downsides. We ask only that energy system decisions be based on facts, and not on emotions and biases that do not apply to 21st century nuclear technology.

While there will be no single technological silver bullet, the time has come for those who take the threat of global warming seriously to embrace the development and deployment of safer nuclear power systems as one among several technologies that will be essential to any credible effort to develop an energy system that does not rely on using the atmosphere as a waste dump.

With the planet warming and carbon dioxide emissions rising faster than ever, we cannot afford to turn away from any technology that has the potential to displace a large fraction of our carbon emissions. Much has changed since the 1970s. The time has come for a fresh approach to nuclear power in the 21st century.

We ask you and your organization to demonstrate its real concern about risks from climate damage by calling for the development and deployment of advanced nuclear energy.

Sincerely,

Dr. Ken Caldeira, Senior Scientist, Department of Global Ecology, Carnegie Institution

Dr. Kerry Emanuel, Atmospheric Scientist, Massachusetts Institute of Technology

Dr. James Hansen, Climate Scientist, Columbia University Earth Institute

Dr. Tom Wigley, Climate Scientist, University of Adelaide and the National Center for Atmospheric Research

In The News
ClimateWire

Julia Pyper, E&E reporter

A national emissions trading system in China, placed over the existing patchwork of provincial climate markets, would be the most cost-effective way for the world's top emitter to reduce its carbon footprint, according to a new report by researchers at the Massachusetts Institute of Technology and Tsinghua University in Beijing.

China took a major step forward in combating climate change earlier this year by launching a set of pilot greenhouse gas trading systems within several of its provinces. This is the first in a series of reforms that will help China meet its larger aim of reducing carbon intensity by 17 percent by 2015, relative to 2010 levels.

Ultimately, China's goal is to reduce its carbon dioxide equivalent emissions per unit of gross domestic product by 40 to 45 percent between 2005 and 2020.

But while setting targets at the provincial level is an effective way to cut emissions, a national target would result in a 20 percent lower cost while achieving the same goal, according to the MIT-Tsinghua study published today in the journal Energy Economics.

"If you set targets at the level of individual provinces, there's no opportunity, for example, for one province where the cost of reducing emissions is higher to do less and another province where the cost of reducing emissions is lower to do more," said Valerie Karplus, director of the Tsinghua-MIT China Energy and Climate Project and an author of the study. "A national system achieves the same reduction at lower cost."

Reducing emissions in China's eastern provinces is generally more expensive than in western ones, where improving the efficiency of coal-fired power plants, for instance, is relatively cheap, Karplus explained. A national system would improve the distribution of the burden among provinces by allowing for exchanges between them.
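Karplus's point can be seen in a toy two-province example (a sketch with invented numbers, not the study's model): with rising marginal abatement costs, letting the low-cost province abate more than the high-cost one delivers the same national reduction more cheaply.

```python
# Toy illustration of why national trading beats uniform provincial targets.
# Assume quadratic abatement costs C_i(q) = 0.5 * a_i * q^2, so marginal cost
# is a_i * q; equalizing marginal costs minimizes the total cost of a fixed
# national reduction. All coefficients and targets here are invented.

a_east, a_west = 4.0, 1.0        # east is a more expensive place to abate
national_target = 100.0          # total tonnes to be abated

def cost(a, q):
    return 0.5 * a * q * q

# Provincial targets: each province abates half, regardless of cost.
q_half = national_target / 2
provincial_cost = cost(a_east, q_half) + cost(a_west, q_half)

# National trading: allocate so marginal costs are equal (a_e*q_e = a_w*q_w).
q_east = national_target * a_west / (a_east + a_west)
q_west = national_target - q_east
national_cost = cost(a_east, q_east) + cost(a_west, q_west)

print(f"provincial targets: {provincial_cost:.0f}")
print(f"national trading:   {national_cost:.0f}")
print(f"savings: {1 - national_cost / provincial_cost:.0%}")
```

In this stylized example the equal-marginal-cost allocation cuts total cost by roughly a third relative to uniform provincial targets; the study's 20 percent figure comes from a full economy-wide model of China's provinces.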

"To make current approaches increasingly cost-effective, and tailored to meet both ... long-term climate goals as well as near-term air pollution targets, the next step is to move to a national emissions trading system," she said.

Alternative to direct regulation

Harvard University economist Robert Stavins said it's no surprise that aggregating provinces in a single system is more cost-effective than regulating several geographic areas separately. In fact, he would have expected the benefit to be greater than 20 percent.

The Chinese government, eager to reduce its environmental footprint while continuing to grow, has already expressed its intent to move away from siloed provincial policies toward a national system at some point after 2015. The same thing could happen in the U.S., too, said Stavins, who was one of the designers of the U.S. cap-and-trade system that failed to pass Congress in 2009.

In the absence of a national policy, some parts of the U.S. have established subnational systems, such as California's cap-and-trade program and the Regional Greenhouse Gas Initiative in the Northeast. As in China, the U.S. could eventually move toward a unified pricing policy, Stavins said.

International linkages among early movers are already starting to form, he said. California and Quebec joined forces this year, and Australia is planning to partner with nascent carbon markets in New Zealand and South Korea.

While many countries face political resistance to putting a price on carbon, it remains the single most effective way to address climate change, Stavins said.

"In an economy such as China or the U.S. -- in the industrialized world as well as in the major emerging economies -- the only policy approach that is feasible to achieve truly significant emissions reductions is going to be some kind of carbon pricing regime, whether that is done through a carbon tax or a cap-and-trade system," he said.

"It's inconceivable that direct regulation for these hundreds of millions of different sources of carbon dioxide emissions and, more broadly, greenhouse gas emissions ... could ever be achieved," he added.

In The News
MIT News

Peter Dizikes, MIT News Office

If you have stopped at a gas station recently, there is a good chance your auto has consumed fuel with ethanol blended into it. Yet the price of gasoline is not substantially affected by the volume of its ethanol content, according to a paper co-authored by an MIT economist. The study seeks to rebut the claim, broadly aired over the past couple of years, that widespread use of ethanol has reduced the wholesale cost of gasoline by $0.89 to $1.09 per gallon.

Whatever the benefits or drawbacks of ethanol, MIT’s Christopher Knittel says, price issues are not among them right now.

“The point of our paper is not to say that ethanol doesn’t have a place in the marketplace, but it’s more that the facts should drive this discussion,” says Knittel, the William Barton Rogers Professor of Energy and a professor of applied economics at the MIT Sloan School of Management.

The 10 percent solution?

The vast majority of ethanol sold in the United States is made from corn. It now constitutes 10 percent of U.S. gasoline, up from 3 percent in 2003.

It is another matter, however, whether that increase in ethanol content produces serious savings at the pump, as some claim. Knittel and his co-author, economist Aaron Smith of the University of California at Davis, contest such an assertion in their paper, which is forthcoming in The Energy Journal, a peer-reviewed publication in the field.

The claim that ethanol lowers prices derives from a previous study on the issue, which Knittel and Smith believe is problematic. That prior work involves what energy economists call the “crack ratio,” which is effectively the price of gasoline divided by the price of oil.

The crack ratio is something energy analysts can use to understand the relative value of gasoline compared to oil: The higher the crack ratio, the more expensive gasoline is in relative terms. If ethanol were a notably cheap component of gasoline production, its increasing presence in the fuel mix might reveal itself in the form of a decreasing crack ratio.

Gasoline is made primarily from oil, but other inputs also figure into the cost of refining and distributing it. Thus if oil prices double, Knittel points out, gasoline prices do not necessarily double. In general, then, when oil prices — the denominator of this ratio — go up, the crack ratio itself falls.
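A stylized back-of-envelope calculation (all numbers invented for illustration) shows why the crack ratio falls when oil prices rise even if nothing about ethanol changes:

```python
# Illustrative arithmetic: gasoline price ~= crude input cost per gallon plus a
# roughly fixed refining/distribution margin, so the gasoline-to-oil price
# ratio drifts down as crude gets more expensive.
GAL_PER_BBL = 42          # gallons per barrel (approximate)
margin = 0.60             # $/gal refining + distribution margin, held fixed

for oil_per_bbl in (50, 100, 150):
    crude_per_gal = oil_per_bbl / GAL_PER_BBL
    gasoline = crude_per_gal + margin
    crack_ratio = gasoline / crude_per_gal
    print(f"oil ${oil_per_bbl}/bbl -> gasoline ${gasoline:.2f}/gal, "
          f"crack ratio {crack_ratio:.2f}")
```

Because the refining margin does not scale with crude, the crack ratio falls as oil gets more expensive, which is the confound Knittel and Smith argue the earlier work attributed to ethanol.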

The previous work evaluated time periods when oil prices rose, and the percentage of ethanol in gasoline also rose.

But Knittel and Smith assert that the increased proportion of ethanol in gasoline merely correlated with the declining crack ratio, and did not contribute to it in any causal sense. Instead, they think that changing oil prices drove the change in the crack ratio, and that when those prices are accounted for, the apparent effect of ethanol “simply goes away,” as Knittel says.

To further illustrate that the previous study was touting a correlation, not a causal relationship, Knittel and Smith conducted what are known in economics literature as “antitests” of that study’s model. By inserting unconnected dependent variables into the model, they found that the model also produced a strong correlation between ethanol content in gasoline and, for instance, U.S. employment figures — although the latter are clearly unrelated to the composition of gasoline.
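The flavor of such an antitest can be sketched in a few lines (illustrative synthetic data, not the authors' series): regress a variable that obviously has nothing to do with fuel composition, such as employment, on the ethanol share, and the shared time trend alone produces a seemingly strong fit.

```python
import numpy as np

# Two series that merely share a time trend look strongly "related" in a naive
# regression; the antitest exploits exactly this. All data below are synthetic.
rng = np.random.default_rng(0)
years = np.arange(2000, 2013)

ethanol_share = 0.03 + 0.007 * (years - 2000) + rng.normal(0, 0.002, len(years))
employment    = 130 + 0.8   * (years - 2000) + rng.normal(0, 0.5,   len(years))  # millions, invented

# Naive regression of employment on ethanol share, with no control for the
# common trend: slope and R^2 look impressive but are spurious.
slope, intercept = np.polyfit(ethanol_share, employment, 1)
pred = slope * ethanol_share + intercept
r2 = 1 - ((employment - pred) ** 2).sum() / ((employment - employment.mean()) ** 2).sum()
print(f"slope = {slope:.1f}, R^2 = {r2:.2f}")  # high R^2 despite no causal link
```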

The previous work also claimed that if ethanol production came to an immediate halt, gasoline prices would rise by 41 to 92 percent. But Knittel does not think that estimate would bear out in such a scenario.

“In the very short run, if ethanol vanished tomorrow, we would be scrambling to find fuel to cover that for a week, or less than a month,” Knittel says. “But certainly within a month, increases in imports would relax or reduce that price impact.”

Informing the debate

The differing assessments of ethanol’s impact have garnered notice among economists and energy policy analysts. Scott Irwin, an economist at the University of Illinois at Urbana-Champaign who has read the paper, calls it a “convincing and compelling” rebuttal to the idea that expanding ethanol content in gasoline drastically lowers prices.

“The paper dispensed once and for all with that conclusion,” Irwin says. Still, he adds, there remains an open debate about the marginal effects of ethanol content in gasoline, and more empirical work on the subject would be useful.

“A case can be made that it can be a positive few cents,” Irwin says, adding that “reasonable arguments can be made on either side of zero” regarding ethanol’s price impact. In either case, Irwin says, his view is that the effect is currently a small one.

Knittel has posted, on his MIT Sloan web page, a multipart exchange between himself and Dermot Hayes, an Iowa State University economist who is a co-author of the prior work. After an initial finding that ethanol reduced gasoline prices by $0.25 per gallon, Hayes and a co-author produced follow-up studies, examining roughly the decade after 2000, and arrived at the figures of $0.89 and $1.09 per gallon, which gained wider public traction.

Knittel acknowledges that policy decisions about gasoline production are driven by a complex series of political factors, and says his study is not intended to directly convey any policy preferences on his part. Still, he suggests that even ethanol backers in policy debates have reason to keep examining its value.

“Making claims about the benefits of ethanol that are overblown is only going to set up policymakers for disappointment,” Knittel says.

In The News
MIT News Office

Peter Dizikes, MIT News Office

The structure of the auditing business appears problematic: Typically, major companies pay auditors to examine their books under the so-called “third-party” audit system. But when an auditing firm’s revenues come directly from its clients, the auditors have an incentive not to deliver bad news to them.

So: Does this arrangement affect the actual performance of auditors?

In an eye-opening experiment involving roughly 500 industrial plants in the state of Gujarat, in western India, changing the auditing system has indeed produced dramatically different outcomes — reducing pollution, and more generally calling into question the whole practice of letting firms pay the auditors who scrutinize them.

“There is a fundamental conflict of interest in the way auditing markets are set up around the world,” says MIT economist Michael Greenstone, one of the co-authors of the study, whose findings are published today in the Quarterly Journal of Economics. “We suggested some reforms to remove the conflict of interest, officials in Gujarat implemented them, and it produced notable results.”

The two-year experiment was conducted by MIT and Harvard University researchers along with the Gujarat Pollution Control Board (GPCB). It found that randomly assigning auditors to plants, paying auditors from central funds, double-checking their work, and rewarding the auditors for accuracy had large effects. Among other things, the project revealed that 59 percent of the plants were actually violating India’s laws on particulate emissions, but only 7 percent of the plants were cited for this offense when standard audits were used.

Across all types of pollutants, 29 percent of audits, using the standard practice, wrongly reported that emissions were below legal levels. 

The study also produced real-world effects: The state used the information to enforce its pollution laws, and within six months, air and water pollution from the plants receiving the new form of audit were significantly lower than at plants assessed using the traditional method.

The co-authors of the paper are Greenstone, the 3M Professor of Environmental Economics at MIT; Esther Duflo, the Abdul Latif Jameel Professor of Poverty Alleviation and Development Economics at MIT; Rohini Pande, a professor of public policy at the Harvard Kennedy School; and Nicholas Ryan PhD ’12, now a visiting postdoc at Harvard.

The power of random assignment

The experiment involved 473 industrial plants in two parts of Gujarat, which has a large manufacturing industry. Since 1996 the GPCB has used the third-party audit system, in which auditors check air and water pollution levels three times annually, then submit a yearly report to the GPCB.

To conduct the study, 233 of the plants tried a new arrangement: Instead of being hired by the companies running the plants, auditors were randomly assigned to plants in this group by the GPCB. The auditors were paid fixed fees from a central pool of money; 20 percent of their audits were randomly chosen for re-examination. Finally, the auditors received incentive payments for accurate reports.
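A schematic of the treatment-group mechanics might look like the following sketch (the plant count comes from the article; the auditor pool size, fee, bonus, and tolerance are invented placeholders):

```python
import random

# Sketch of the treatment arm: random assignment of auditors to plants, a fixed
# fee paid from a central pool, and a 20% random back-check with an accuracy bonus.
random.seed(0)

plants = [f"plant_{i:03d}" for i in range(233)]      # 233 treatment plants
auditors = [f"auditor_{j:02d}" for j in range(20)]   # pool size is an assumption

# Each treatment plant gets a randomly assigned auditor (the plant does not choose).
assignment = {p: random.choice(auditors) for p in plants}

# Audits are paid a fixed fee from the central pool, not by the plant.
FIXED_FEE = 1.0  # arbitrary units

# 20% of audits are randomly selected for independent re-measurement.
backchecked = set(random.sample(plants, k=int(0.2 * len(plants))))

def auditor_payment(plant, reported, true_value=None, tolerance=0.1):
    """Fixed fee plus an accuracy bonus when a back-check confirms the report.
    Bonus size and tolerance are illustrative, not the study's parameters."""
    pay = FIXED_FEE
    if plant in backchecked and true_value is not None:
        if abs(reported - true_value) <= tolerance * true_value:
            pay += 0.5  # invented bonus for accurate reporting
    return pay
```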

In comparing the 233 plants using the new method with the 240 using the standard practice, the researchers found that almost 75 percent of traditional audits reported particulate-matter emissions just below the legal limit; using the randomized method, only 19 percent of plants fell in that narrow band.

All told, across several different air- and water-pollution measures, inaccurate reports of plants complying with the law dropped by about 80 percent when the randomized method was employed.

The researchers emphasize that the experiment enabled the real-world follow-up to occur.

“The ultimate hope with the experiment was definitely to see pollution at the firm level drop,” Duflo says. The state’s enforcement was effective, as Pande explains, partly because “it becomes cheaper for some of the more egregious pollution violators to reduce pollution levels than to attempt to persuade auditors to falsify reports.”

According to Ryan, the Gujarat case also dispels myths about the difficulty of enforcing laws, since the experiment “shows the government has credibility and will.” 

But how general is the finding?

In the paper, the authors broaden their critique of the audit system, referring to standard corporate financial reports and the global debt-rating system as other areas where auditors have skewed auditing incentives. Still, it is an open question how broadly the current study’s findings can be generalized.

“It would be a mistake to assume that quarterly financial reports for public companies in the U.S. are exactly the same as pollution reports in Gujarat, India,” Greenstone acknowledges. “But one thing I do know is that these markets were all set up with an obvious fundamental flaw — they all have the feature that the auditors are paid by the firms who have a stake in the outcome of the audit.”

Some scholars of finance say the study deserves wide dissemination.

“This is a wonderful paper,” says Andrew Metrick, a professor and deputy dean at the Yale School of Management. “It is a very strong piece of evidence that, in the context they studied, random assignment produces unbiased results. And I think it’s broadly applicable.”

Indeed, Metrick says he may make the paper required reading in a new program Yale established this year that provides research and training for financial regulators from around the world.

To be sure, many large corporations have complicated operations that cannot be audited in the manner of emissions; in those cases, a counterargument goes, retaining the same auditor who knows the firm well may be a better practice. But Metrick suggests that in such cases, auditors could be randomly assigned to firms for, say, five-year periods. At a minimum, he notes, the Dodd-Frank law on financial regulation mandates further study of these issues.

Greenstone also says he hopes the current finding will spur related experiments, and gain notice among regulators and policymakers.

“No one has really had the political will to do something about this,” Greenstone says. “Now we have some evidence.”

The study was funded by the Center for Energy and Environmental Policy Research, the Harvard Environmental Economics Program, the International Growth Centre, the International Initiative for Impact Evaluation, the National Science Foundation and the Sustainability Science Program at Harvard.

In The News
MIT Earth, Atmospheric and Planetary Sciences

The Sverdrup Gold Medal is "granted to researchers who make outstanding contributions to the scientific knowledge of interactions between the oceans and the atmosphere." The award, in the form of a medallion, will be presented at the AMS Annual Meeting to be held on 2–6 February 2014 in Atlanta, GA.

John Marshall is an oceanographer with broad interests in climate and the general circulation of the atmosphere and oceans, which he studies through the development of mathematical and numerical models of physical and biogeochemical processes. His research has focused on problems of ocean circulation involving interactions between motions on different scales, using theory, laboratory experiments, and observations as well as innovative approaches to global ocean modeling pioneered by his group at MIT.

Current research foci include ocean convection and subduction, stirring and mixing in the ocean, eddy dynamics and the Antarctic Circumpolar Current, the role of the ocean in climate, climate dynamics, and aquaplanets.

Professor Marshall received his PhD in atmospheric sciences from Imperial College, London, in 1980. He joined EAPS in 1991 as an associate professor and has been a professor in the department since 1993. He was elected a Fellow of the Royal Society in 2008. He is coordinator of Oceans at MIT, a new umbrella organization dedicated to all things related to the ocean across the Institute, and director of MIT’s Climate Modeling Initiative (CMI).

Commentary
The Energy Collective

China’s deployment of renewable electricity generation – starting with hydropower, then wind, and now biomass and solar – is massive. China leads the world in installed renewable energy capacity (both including and excluding hydro) and has sustained annual wind additions in excess of 10 gigawatts (10 GW) for four straight years. Half of the hydropower installed worldwide last year was in China. And solar and biomass-fired electricity are expected to grow ten-fold over the period 2010-2020. Most striking amidst all these impressive accomplishments has been the Chinese government’s seemingly unwavering financial support for renewable energy generators even as other countries scale back or restructure similar support programs.

The balance sheet of the central renewable energy fund is coming under strain, however. Supplied primarily through a fixed surcharge on all electricity purchases, the fund has faced increasing shortfalls in recent years as renewable growth has picked up, which may have contributed to late payments or non-payment to generators. Especially as more costly solar comes online, both the revenue streams and the subsidy outlays to generators will require difficult modifications to keep the fund solvent. More broadly, investment decisions are heavily influenced by the historically high penetration of state-owned energy companies in the renewables sector, which have responsibilities to the state beyond turning a profit.

Recognizing these challenges of solvency and efficiency, the central government faces a crossroads in its policy support for the renewable sector; one possible approach would be migrating to a hybrid system of generation subsidies coupled with mandatory renewable portfolio standards (RPS). This fourth and final post in the Transforming China’s Grid series looks out to 2020 at how China’s renewable energy policies may evolve, and how they must evolve, to ensure strong growth in the share of renewable energy in the power mix.

Policy Support to Date

Investment in renewable energy has risen steadily in China over the last decade, with the wind and solar sectors hitting a record $68 billion in 2012, according to Bloomberg New Energy Finance (BNEF). These sums – together with massive state-led investments in hydropower – have translated into a surge of renewable energy capacity, which since 2006 has included annual wind capacity additions of 10-15 GW and a near doubling of hydropower (see graph). Renewables now provide more than a quarter of China’s electricity generating capacity.

Renewable energy capacity in China, 1996-2012

Early on in both the wind and solar sectors, the tariffs paid to generators were determined by auction in designated resource development areas (called concessions). These auctions underwent a number of iterations to discover the rates the market would bear before policy support was transitioned to the fixed regional feed-in tariffs currently in place: 0.51-0.61 yuan / kWh (8.3-10.0 US¢ / kWh) for wind, and 0.90-1.00 yuan / kWh (15-16 US¢ / kWh) for solar. The result of this methodical policy evolution has been steady growth of wind and solar capacity year after year. Contrast this with the uneven wind capacity additions in the U.S., attributable to the haphazard boom-bust cycles of U.S. wind policy (see graph). Hydropower project planning is directed by the government, and rates are set project by project (typically lower than the wind or solar FITs).

Wind capacity in China, US, 2001-2012

Also important to developers – though not captured in BNEF’s investment totals – are reduced value-added taxes on renewable energy projects, preferential land and loan terms, and significant transmission projects serving renewable power bases, whose costs are socialized across all ratepayers. On the manufacturing side, the government has also stepped in to prop up and consolidate key solar companies.

Guiding these policies has been a continued ratcheting up of capacity targets, beginning with the Medium to Long-Term Renewable Energy Plan in 2007. These national goals – while not legally binding – shape sectoral policies and encourage local officials to go the extra mile in support of these types of projects. The most recent iterations call for 104 GW of wind, 260 GW of hydro, and 35 GW of solar installed and grid-connected by 2015 (see table). In addition to these “soft” pushes, generators with over 5 GW of capacity were required under the 2007 plan to reach specified capacity targets for non-hydro renewables: 3% by 2010 and 8% by 2020. However, there appeared to be no penalty for non-compliance: half of the companies missed their 2010 mandatory market share targets.

China’s renewable energy targets as of September 2013 (GW, grid-connected)

|           | 2012 Actual (a)             | 2015 Goal                     | 2020 Goal                      |
|-----------|-----------------------------|-------------------------------|--------------------------------|
| Wind (b)  | 62                          | 104 (99 onshore, 5 offshore)  | 200 (170 onshore, 30 offshore) |
| Hydro (c) | 249 (incl. 20 pumped hydro) | 290 (incl. 30 pumped hydro)   | 420 (incl. 70 pumped hydro)    |
| Solar (d) | 3                           | 35 (e)                        | 50                             |
| Biomass   | 4                           | 13 (f)                        | 30 (g)                         |

Sources
 

Rubber Missing the Road in Generation

Amidst the backdrop of impressive capacity additions, a separate story has unfolded with respect to generation. Wind in China faces the twin challenges of connection and curtailment, as I outlined previously, which result in much lower capacity factors than wind farms abroad achieve. These problems have persisted for several years, so one might think that wise developers would demand higher tariffs before investing and that a new equilibrium, with less investment, would be established.

But the incentives to invest in China’s power sector are rarely based on economics alone. The vast majority of wind projects are developed by large, state-owned enterprises (SOEs). In recent years, SOEs have been responsible for as much as 90% of wind capacity installed (for comparison, SOEs account for an average of 70% of the overall power mix). In 2011, the top 10 wind developers were all SOEs, which faced some scrutiny under the 2010 mandatory share requirements because of their size. In addition, because generators faced only a capacity requirement, it was more important to get turbines in the ground than to get them spinning right away (though, as we saw, many still missed their capacity targets). Grid companies, on the other hand, had generation targets (1% by 2010 and 3% by 2020), which were also unmet in some locations. The next round of policies has sought to address both generation and connection issues.

Other Cracks in the Support Structure

Though generation lagged capacity, it was still growing much faster than predicted, leading to shortfalls in the funds needed to pay the feed-in tariff. A single surcharge on all electricity purchases supplies the centrally administered renewable energy fund, which fell short by 1.4 billion yuan ($200 million) in 2010 and 22 billion yuan ($3.4 billion) in 2011. Prior to the recent surcharge increase, some estimated the shortfall would rise to 80 billion yuan ($14 billion) by 2015. The difference would either not reach developers or have to be appropriated from elsewhere.

In addition, from 2010-2012, there were long delays in reimbursing generators their premium under the FIT. The situation was so serious that the central planning ministry, the National Development and Reform Commission (NDRC), put out a notice in 2012 demanding that grid companies pay the two-year-old backlog. These receivables issues are particularly damaging to wind developers, who operate on slim margins and need equity to invest in new projects.

To address the solvency of the renewable energy fund, in August the NDRC doubled the electricity surcharge on industrial customers to 0.015 yuan / kWh (0.25 US¢ / kWh), keeping the residential and agricultural surcharge at 0.008 yuan / kWh (0.13 US¢ / kWh) (Chinese announcement). With a little over three-quarters of electricity going to industry, this will substantially increase contributions to the fund. At the same time, solar FITs were scaled back slightly by instituting a regional three-tier system akin to that developed for wind: sunny but remote areas in the north and northwest offer 0.90-0.95 yuan / kWh (15-15.5 US¢ / kWh), while eastern and southern provinces close to load centers but with lower quality resources offer 1 yuan / kWh (16 US¢ / kWh) (Chinese announcement).
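A rough back-of-envelope of what the surcharge change means for the fund (total consumption and the industrial share below are assumptions for illustration, not official figures):

```python
# Illustrative fund-revenue arithmetic using the surcharge rates quoted above.
total_twh = 5000                      # assumed annual electricity use, TWh
industry_share = 0.77                 # "a little over three-quarters"

ind_kwh   = total_twh * industry_share * 1e9
other_kwh = total_twh * (1 - industry_share) * 1e9

old_revenue = (ind_kwh * 0.008 + other_kwh * 0.008) / 1e9   # billion yuan/yr
new_revenue = (ind_kwh * 0.015 + other_kwh * 0.008) / 1e9   # billion yuan/yr

print(f"before: ~{old_revenue:.0f} bn yuan/yr, after: ~{new_revenue:.0f} bn yuan/yr")
```

Under these assumptions, annual surcharge revenue rises by well over half, which illustrates why shifting more of the burden onto industrial users matters so much for keeping the fund solvent.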

Additionally, distributed solar electricity consumed on-site (anything from rooftop systems to factories with panels) will receive a 0.42 yuan / kWh (6.9 US¢ / kWh) subsidy. Excess electricity sold back to the grid, where grid connections and policy are in place, will be compensated at the prevailing coal benchmark tariff, ranging from 0.3-0.5 yuan / kWh (5-8 US¢ / kWh). It is unclear whether these adjustments will mitigate the large financial demands expected to support solar (whose FIT outlays per kWh are still more than double those for wind).

Wind, whose FIT has been in place since 2009, may not be immune to this restructuring either. Some cite the falling cost of wind equipment and the fund gap as cause for scaling back wind subsidies.

Where to Go From Here

Despite this budget squeeze, the Chinese government seems intent on sustaining the clean energy push. Even as it weakens financial incentives for renewable energy, the central government is getting smarter about how to achieve its long-term clean energy targets. Last year the National Energy Administration (NEA) released draft renewable portfolio standards (RPS), which would replace the mandatory share program with a tighter target focused on generation: an average of 6.5% from non-hydro renewables by 2015. Grid companies will have purchase requirements ranging from 3% to 15%, and provincial consumption targets range from 1% to 15% (more details here, subscription req’d). This approach appropriately recognizes the myriad regulatory barriers to increasing wind uptake by putting responsibility for meeting targets on all stakeholders.

China is paving new ground as it shifts further toward low-carbon sources of electricity. What worked in the past, when wind and solar made minor contributions to China’s energy mix, will likely not be sufficient to meet the cost constraints and integration challenges out to 2020. As with all policies in China, designing the policy is less than half the battle; implementation and enforcement are central to changing the status quo.