News + Media
Vulnerable coastal regions could face storm surges of unprecedented magnitude in the next century
Jennifer Chu | MIT News Office
“Grey swan” cyclones — extremely rare tropical storms that are impossible to anticipate from the historical record alone — will become more frequent in the next century for parts of Florida, Australia, and cities along the Persian Gulf, according to a study published today in the journal Nature Climate Change.
In contrast with events known as “black swans” — wholly unprecedented and unexpected occurrences, such as the 9/11 attacks and the 2008 financial collapse — grey swans may be anticipated by combining physical knowledge with historical data.
In the case of extreme tropical cyclones, grey swans are storms that can whip up devastating storm surges, beyond what can be foreseen from the weather record alone — but which may be anticipated using global simulations, along with historical data.
In the current paper, authors Kerry Emanuel, the Cecil and Ida Green Professor in Earth and Planetary Sciences at MIT, and Ning Lin of Princeton University simulated the risk of grey swan cyclones, and their resulting storm surges, for three vulnerable coastal regions. They found a risk of such storms for regions such as Dubai, United Arab Emirates, where tropical storms have never been recorded. In Tampa, Florida, and Cairns, Australia — places that experience fairly frequent storms — storms of unprecedented magnitude will be more likely in the next century.
“These are all locations where either no one’s anticipated a hurricane at all, such as in the Persian Gulf, or they’re simply not aware of the magnitude of disaster that could occur,” Emanuel says.
Beyond forecasts
To date, the world has yet to see a black swan or grey swan cyclone: Every hurricane that has ever occurred in recorded history could have been anticipated, given the previous pattern of storm activity.
“In the realm of storms, I can’t really think of an example in the last five or six decades that anybody could call a black swan,” Emanuel says. “For example, Hurricane Katrina was anticipated on the timescale of many years. Everybody knew New Orleans was going to get hammered. Katrina was not meteorologically unusual at all.”
However, as global warming is expected to significantly alter the Earth’s atmosphere and oceans in the coming decades, the track and magnitude of hurricanes may skew widely from historical patterns.
To get a sense of the frequency of grey swan cyclones in the next century, Emanuel and Lin employed a technique that Emanuel’s team developed 10 years ago, in which they embed a detailed hurricane model into a global climate model.
For this paper, the team embedded the hurricane model into six separate climate models, each of which is based on environmental data from the past, or projections for the future. For each simulation, they generated, or “seeded,” thousands of randomly distributed nascent storms, and observed which storms produced unprecedented storm surges, given environmental factors such as temperature and location.
From their simulations, the researchers observed that storm surges from grey swan cyclones could reach as high as 6 meters, 5.7 meters, and 4 meters in Tampa, Cairns, and Dubai, respectively, in the current climate. By the end of the century, surges of 11 meters and 7 meters could strike Tampa and Dubai, respectively.
Changing risk
To put this in perspective, the last big hurricane to hit Tampa, in 1921, produced a devastating storm surge that measured 3 meters, or about 9 feet high.
“A storm surge of 5 meters is about 17 feet, which would put most of Tampa underwater, even before the sea level rises there,” Emanuel says. “Tampa needs to have a good evacuation plan, and I don’t know if they’re really that aware of the risks they actually face.”
Emanuel says that Dubai, and the rest of the Persian Gulf, has never experienced a hurricane in recorded history. Therefore, any hurricane, of any magnitude, would be an unprecedented event.
“Dubai is a city that’s undergone a really rapid expansion in recent years, and people who have been building it up have been completely unaware that that city might someday have a severe hurricane,” Emanuel says. “Now they may want to think about elevating buildings or houses, or building a seawall to somehow protect them, just in case.”
Upper limit shift
The team also found that as storms grow more powerful in the coming century, with climate change, the most extreme storms will become more frequent.
The team’s results show that the expected return period for a grey swan cyclone with a 6-meter storm surge in Tampa would fall from 10,000 years today to as little as 700 years by the end of the century. Put another way, today Tampa has a one-in-10,000 chance of being struck by a devastating grey swan cyclone in any given year — odds that will remain the same next week, or next year.
“Hurricanes, unlike earthquakes, are like a roll of the die,” Emanuel says. “Just because you had a big hurricane last year doesn’t make it more or less likely that you’d have a big hurricane next year.”
But in 100 years, Tampa’s odds of a 6-meter storm surge will be 14 times higher, as the world’s climate shifts.
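The roughly 14-fold figure follows directly from the two return periods quoted. A quick sketch (the return periods are the article's figures; converting a return period into an annual probability this way is a standard assumption):

```python
# Convert a return period (years) into an annual strike probability.
# Return periods are the article's figures for a 6-meter surge in Tampa.
def annual_probability(return_period_years: float) -> float:
    return 1.0 / return_period_years

p_today = annual_probability(10_000)  # current climate
p_2100 = annual_probability(700)      # end-of-century estimate

print(f"today: {p_today:.6f} per year")   # 0.000100
print(f"2100:  {p_2100:.6f} per year")    # 0.001429
print(f"ratio: {p_2100 / p_today:.1f}x")  # ~14.3x
```

The ratio 10,000/700 is about 14.3, matching the "14 times higher" odds cited.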
“What that really translates to is, you’re going to see an increased frequency of the most extreme events,” Emanuel says. “Whereas the upper limit of hurricane wind speeds today might be 200 mph, 100 years from now it might be 220 mph. That means you’re going to start seeing hurricanes that you’ve never seen before.”
The group’s estimates of extreme storm intensity, while high, are not unrealistic for the coming century, says Greg Holland, senior scientist at the National Center for Atmospheric Research.
“This is an excellent example of the type of study needed to fill out our knowledge of what is possible with damaging events such as storm surge,” says Holland, who was not involved in the study. “Although the events listed are … rare, a knowledge of their possibility helps considerably with assessing more likely events in planning.”
This research was funded in part by the National Science Foundation.
Study explains how rain droplets attract aerosols out of the atmosphere
Jennifer Chu | MIT News Office
As a raindrop falls through the atmosphere, it can attract tens to hundreds of tiny aerosol particles to its surface before hitting the ground. The process by which droplets and aerosols attract is coagulation, a natural phenomenon that can act to clear the air of pollutants like soot, sulfates, and organic particles.
Atmospheric chemists at MIT have now determined just how effective rain is in cleaning the atmosphere. Given the altitude of a cloud, the size of its droplets, and the diameter and concentration of aerosols, the team can predict the likelihood that a raindrop will sweep a particle out of the atmosphere.
The researchers carried out experiments in the group’s MIT Collection Efficiency Chamber — a 3-foot-tall glass chamber that generates single droplets of rain at a controlled rate and size. As droplets fell through the chamber, researchers pumped in aerosol particles, and measured the rate at which droplets and aerosols merged, or coagulated.
From the measurements, they calculated rain’s coagulation efficiency — the ability of a droplet to attract particles as it falls. In general, they found that the smaller the droplet, the more likely it was to attract a particle. Conditions of low relative humidity also seemed to encourage coagulation.
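A collection (coagulation) efficiency of this kind is commonly defined as the number of particles a droplet actually collects, divided by the number of particles in the volume it geometrically sweeps out while falling. A minimal sketch with hypothetical values (not data from the study):

```python
import math

# Illustrative back-of-the-envelope collection efficiency: particles a
# droplet actually collects, divided by the number of particles in the
# cylinder it geometrically sweeps out. All values are hypothetical,
# not measurements from the study.
droplet_radius = 20e-6      # m (a small drizzle-sized droplet)
fall_distance = 1.0         # m (roughly the chamber height)
aerosol_conc = 1e10         # particles per cubic meter
particles_collected = 3     # hypothetical count per droplet

swept_volume = math.pi * droplet_radius**2 * fall_distance
geometric_encounters = swept_volume * aerosol_conc
efficiency = particles_collected / geometric_encounters

print(f"collection efficiency: {efficiency:.2f}")
```

An efficiency below 1 means the droplet collects fewer particles than a purely geometric sweep would suggest; values above 1 would indicate additional attraction, such as electrostatic effects.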
Dan Cziczo, an associate professor of atmospheric chemistry at MIT, says the new results, published this month in the journal Atmospheric Chemistry and Physics, represent the most accurate values of coagulation to date. These values, he says, may be extrapolated to predict rain’s potential to clear a range of particles in various environmental conditions.
“Say you’re a modeler and want to figure out how a cloud in Boston cleans the atmosphere versus one over Chicago that’s much higher in altitude — we want you to be able to do that, with this coagulation efficiency number we produce,” Cziczo says. “This can help address issues such as air quality and human health, as well as the effect of clouds on climate.”
The paper’s co-authors are postdoc Karin Ardon-Dryer and former postdoc Yi-Wen Huang.
Overestimating rain
Cziczo’s group is not the first to simulate the interaction of rain and aerosols in the lab. Over the past decade, others have built intricate chambers to track coagulation. But in those experiments, coagulation events were very rare and extremely difficult to pick out. Scientists had known that a droplet’s electric charge plays a big role in attracting particles, so researchers began to alter the charges of droplets and particles to force coagulation to occur.
“This is where we really started getting ourselves in trouble as scientists,” Cziczo says of the field. “To actually get the process to work, people were tuning it into a range that was not atmospherically relevant.”
As a result, researchers were seeing many more coagulation events. However, the results were based on electric charges that were much higher than what had been observed in the atmosphere.
“In some cases, we were seeing people using 10 or 100 times the charge, which maybe you’d only see in the middle of the most severe thunderstorm ever,” Cziczo says.
The experiments, Cziczo says, essentially overestimated rain’s cleansing effects.
Stripping a droplet
To get a more accurate picture of coagulation, Cziczo’s group constructed a new chamber with a single-droplet generator, an instrument that can be calibrated to produce single droplets at a specific size, frequency, and charge. Typically, droplet generators impart too much charge onto a droplet. To produce electrical charges that droplets actually carry in the atmosphere, the team used a small radioactive source to strip away a small amount of charge from each droplet.
The team then pumped aerosol particles of a known size into the lower part of the chamber. As the droplets fell to the floor, they evaporated, leaving behind only salt — and, if coagulation had occurred, aerosols. The residual particles were then piped through a single-particle mass spectrometer, which determined whether the salt — and thereby, the droplet — had attracted an aerosol.
The researchers ran multiple experiments, varying the relative humidity of the chamber, as well as the droplet size and frequency. They calculated the coagulation efficiency for each run, and found that smaller droplets were more likely to attract aerosols, particularly under conditions of low relative humidity.
Ultimately, Cziczo says, a better understanding of particle and droplet interactions will give scientists a clearer idea of climate change’s trajectory: One of the major uncertainties in global warming projections is how greenhouse gases will affect cloud formation. As clouds play a major role in maintaining the Earth’s radiative budget — how much heat is trapped or escapes — Cziczo says it’s essential to understand the relationship between a cloud’s water droplets and particles in the atmosphere.
“This type of data is lacking in the literature and should improve model simulations of how cloud and fog droplets can scavenge aerosol particles,” says Margaret Tolbert, a professor of chemistry and biochemistry at the University of Colorado who was not involved in the study. “Improvements in understanding aerosol microphysics ultimately helps with predictions of air quality and climate change, since aerosols are central to both.”
This research was funded, in part, by the National Oceanic and Atmospheric Administration.
Paul O'Gorman: Extreme storm modeler
Jennifer Chu | MIT News Office
Several winters back, while shoveling out his driveway after a particularly heavy snowstorm, Paul O’Gorman couldn’t help but wonder: How is climate change affecting the Boston region’s biggest snow events?
The question wasn’t an idle one for O’Gorman: For the past decade, he’s been investigating how a warming climate may change the intensity and frequency of the world’s most extreme storms and precipitation events.
In 2014, O’Gorman decided to look into how increased warming may affect daily snowfall around the world. In a Nature study that has since been widely quoted, he reported that while most of the Northern Hemisphere will see less total snowfall in a warmer climate, regions where the average winter temperature is near a “sweet spot” will still experience severe blizzards that dump over a foot of snow in a single day.
As it happened, the following winter in Boston produced consecutive blizzards that covered the city in a record-breaking 110 inches of snow, with much of it falling in a single month.
O’Gorman was on sabbatical in Australia at the time, and missed the towering snowbanks, damaging ice dams, and citywide gridlock. But Boston’s extreme winter has spurred a follow-on project for O’Gorman, who recently was awarded tenure as associate professor in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS).
“While I have previously studied daily snowfall, it would definitely be interesting to study these extreme monthly snowfalls,” O’Gorman says. “They obviously can have a big impact in an urban environment, as we saw in Boston.”
“Cross-fertilization of ideas”
O’Gorman grew up in Tullamore, a small town in the midlands of Ireland that, like the rest of the country, receives frequent rainfall throughout the year, but seldom experiences very heavy rainfall or snowfall.
Extreme precipitation was far from O’Gorman’s focus when he enrolled at Trinity College in Dublin. There, he chose to study theoretical physics, and later fluid dynamics, which gave him the opportunity to work with supercomputers to simulate fluid flow — work that earned him a master’s degree in high-performance computing.
At the time, O’Gorman was interested in applying his work in fluid dynamics to problems related to turbulence generated by aircraft. In 1999, he headed to the United States, where he pursued a PhD in aeronautics at Caltech.
“That was a bit of a jump culturally, for sure,” O’Gorman recalls. “One of the nice things is, Caltech is kind of a small place where, like MIT, there’s a lot of cross-fertilization of ideas.”
In fact, O’Gorman’s interest in atmospheric science grew out of just such an opportunity. As part of his studies in aeronautical engineering, he took an elective on turbulence in the atmosphere and ocean, taught by climate scientist Tapio Schneider.
“[The class] totally changed the course of my career and interests,” O’Gorman says.
“I had been studying turbulence on small scales, and now I was learning about turbulence at the planetary scale. What struck me about the fluid flow of the atmosphere was that there are different layers, as well as the rotation of the planet, clouds, precipitation, and radiation all interacting at the same time, and there were a lot of unanswered questions that, to me, were all pretty fascinating.”
After earning a PhD in aeronautics, O’Gorman switched career paths, and worked with Schneider as a postdoc, investigating turbulence in the atmosphere — and in particular, the atmosphere’s response to global warming. When Schneider was invited to a scientific meeting on extreme events, O’Gorman began a research project that ultimately set his course on the study of extreme precipitation.
Climate shift
In 2008, O’Gorman joined the EAPS faculty as an assistant professor, and has since been exploring how atmospheric warming affects the atmospheric circulation and extreme events.
Part of his research continues the work he did as a postdoc with Schneider, in which the two studied climate change’s effect on water vapor: As the climate warms, there is more water vapor in the atmosphere, which in turn acts to further heat the atmosphere. The effect of water vapor and latent heat release has not yet been fully incorporated in existing theories of the atmosphere. O’Gorman says understanding water vapor’s role could help explain how climate change affects rapidly deepening storms at mid-latitude locations, such as the United States and Europe.
While much of his work is based on theoretical modeling, O’Gorman occasionally works with actual weather observations. In 2013, he looked to data collected by weather balloons around the world to see how the atmosphere’s temperature profile has varied with altitude over recent decades. In the lowest layer of the atmosphere, temperatures get colder with altitude. O’Gorman and his student Martin Singh had predicted that as the climate warms, this temperature gradient will essentially shift upward. However, the theory hadn’t been tested with observations.
O’Gorman and Singh analyzed data from weather balloons around the world, each of which took temperature measurements as it rose up through the atmosphere. They found that, based on the measurements, the atmosphere’s temperature profile did indeed seem to be shifting upward over time, consistent with the theory.
“We found if you look at the temperature profile in the current climate, you can predict what it will do in a warmer climate,” O’Gorman says. “This is one of the factors that affects how much radiation is emitted to space, which affects how much the planet warms.”
In the next few years, he hopes to take advantage of the increasing computing power of climate models to track the intensity of rain and snowstorms in response to influences such as greenhouse gases.
“Computers have gotten powerful enough now that we can do simulations of the whole globe, while resolving clouds to some extent,” O’Gorman says. “We can study how convection organizes itself on all sorts of different scales, all the way up to planetary scales. So I think this is a very exciting moment.”
New findings show Asia produces twice the mercury emissions previously estimated.
Jennifer Chu | MIT News Office
Once mercury is emitted into the atmosphere from the smokestacks of power plants, the pollutant has a complicated trajectory; even after it settles onto land and sinks into oceans, mercury can be re-emitted into the atmosphere repeatedly. This so-called “grasshopper effect” keeps the highly toxic substance circulating as “legacy emissions” that, combined with new smokestack emissions, can extend the environmental effects of mercury for decades.
Now an international team led by MIT researchers has conducted a new analysis that provides more accurate estimates of sources of mercury emissions around the world. The analysis pairs measured air concentrations of mercury with a global simulation to calculate the fraction of mercury that is either re-emitted or that originates from power plants and other anthropogenic activities. The result of this work, researchers say, could improve estimates of mercury pollution, and help refine pollution-control strategies around the world.
The new analysis shows that Asia now releases a surprisingly large amount of anthropogenic mercury. While its increased burning of coal was known to exacerbate mercury emissions and air pollution, the MIT team estimates that Asia produces more than double the mercury emissions previously estimated.
Noelle Selin, the Esther and Harold E. Edgerton Career Development Associate Professor in MIT’s Institute for Data, Systems, and Society and the Department of Earth, Atmospheric and Planetary Sciences, says the new analysis can also give scientists a better idea of how long legacy emissions — mercury re-emitted by the land and ocean — stick around in the atmosphere. This is because the analysis can more accurately calculate the total amount emitted by land and ocean sources.
“The timescale under which mercury circulates in the environment tells us about how fast we’ll recover if we limit mercury emissions,” Selin says. “We can better quantify mercury cycling with this method.”
Selin and Shaojie Song, a graduate student in EAPS, have published their results in the journal Atmospheric Chemistry and Physics.
Top-down approach
The team’s analysis improves on other models that take a bottom-up approach. Such models estimate mercury emissions for a region by considering factors such as the amount of coal burned in a power plant and the types of equipment in a plant used to control emissions. Models then often extrapolate data from a few sources to apply to an entire region.
However, there are a number of uncertainties with such bottom-up modeling, and it’s often difficult to obtain the required data from individual power plants.
Instead, Selin and Song’s analysis takes a top-down approach, combining bottom-up estimates with actual measurements of mercury emissions from monitoring stations around the world.
In their analysis, the team took bottom-up estimates of mercury emissions from a 2010 emissions inventory by the United Nations — a frequently used source of estimated anthropogenic emissions around the world. The researchers plugged these emissions into a global mercury transport model called GEOS-Chem — a model originally developed by Selin that has since been used widely to track how mercury circulates through the land, ocean, and air.
The GEOS-Chem model essentially divides the atmosphere into many small boxes. After plugging in bottom-up estimates of mercury emissions, the researchers ran the model to simulate the chemical and physical processes that act to circulate mercury within and between boxes. They then obtained actual measurements of mercury emissions taken from monitoring stations around the world. They compared each station’s measurements with the model’s estimates for the corresponding box in which the station was located.
“We can definitely see some differences, which tells us the [bottom-up] emissions may be wrong in some places,” Song says.
Assessing policy levers to address mercury
For locations where the measured and modeled concentrations did not match up, the group used a Bayesian inversion method — a statistical technique that combines observations with prior knowledge, weighting each by its uncertainty. With this method, Selin and Song determined, for each station’s location, the quantitative contribution of mercury sources that would make up the total measured concentrations. For example, a location in the middle of the ocean, far from any terrestrial sources, is more likely to see mercury re-emitted from the ocean, rather than from terrestrial or anthropogenic sources. Then, in a first application of these techniques to global mercury concentrations, they used this quantitative approach to calculate what the measured concentrations implied about their sources.
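The core of such an inversion can be sketched as a standard linear Gaussian update: prior (bottom-up) emission estimates are nudged toward the observations, with each weighted by its uncertainty. Everything below (the two-source setup, the transport matrix, all numbers) is an illustrative assumption, not the study's actual configuration:

```python
import numpy as np

# Toy linear Bayesian inversion in the spirit of a top-down analysis.
# Two hypothetical source regions, three hypothetical monitoring sites.
x_prior = np.array([500.0, 300.0])        # prior emissions (tons/yr)
S_prior = np.diag([200.0**2, 100.0**2])   # prior error covariance

# "Transport" operator: how much each source contributes to each site.
H = np.array([[0.8, 0.1],
              [0.2, 0.7],
              [0.5, 0.5]])
S_obs = np.diag([30.0**2] * 3)            # observation error covariance
y = np.array([650.0, 400.0, 500.0])       # measured concentrations

# Standard Gaussian update: posterior mean via a Kalman-type gain.
K = S_prior @ H.T @ np.linalg.inv(H @ S_prior @ H.T + S_obs)
x_post = x_prior + K @ (y - H @ x_prior)

print("posterior emissions estimate:", x_post)
```

The posterior fits the observations better than the prior did, by construction; in the study, an analogous adjustment is what raised the estimate of Asian emissions above the bottom-up inventory.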
By their calculations, the researchers estimated that, worldwide, Asia could produce up to 1,770 tons of mercury emissions per year — more than twice the amount estimated by bottom-up models.
“It was higher than we expected,” Selin says. “Given the pollution in China and India and increased use of coal, it does make sense. It wasn’t an out-of-the-ballpark result, but it does give you some pause to think about how much mercury could be coming out of Asia.”
In related work published this spring, Selin’s research group assessed how a new U.N. treaty could affect future mercury emissions from coal-fired power plants in Asia. They concluded that future emissions controls would have both global and domestic benefits.
In their more recent top-down analysis, the team also found that fewer mercury emissions came from terrestrial sources, meaning the land re-emitted a smaller amount of legacy emissions than expected — a small silver lining in a world of continuing mercury pollution, according to Selin.
“That means that legacy mercury is a smaller fraction of present-day mercury emissions than we thought, which means that the policy lever for addressing mercury pollution through controlling current emissions is slightly larger,” Selin says. “You still have to worry a lot about legacy emissions, but we could recover a bit more quickly because they are a smaller fraction of the total.”
This research was funded, in part, by the National Science Foundation.
Study finds many species may die out and others may migrate significantly as ocean acidification intensifies.
Jennifer Chu | MIT News Office
Oceans have absorbed up to 30 percent of human-made carbon dioxide around the world, storing dissolved carbon for hundreds of years. As the uptake of carbon dioxide has increased in the last century, so has the acidity of oceans worldwide. Since pre-industrial times, the pH of the oceans has dropped from an average of 8.2 to 8.1 today. Projections of climate change estimate that by the year 2100, this number will drop further, to around 7.8 — significantly lower than any levels seen in open ocean marine communities today.
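Because pH is a logarithmic scale, these small-looking drops translate into large relative changes in hydrogen-ion concentration. A quick check using the article's numbers:

```python
# pH = -log10([H+]), so relative acidity scales as a power of ten.
def h_ion(pH: float) -> float:
    return 10.0 ** (-pH)

pre_industrial, today, year_2100 = 8.2, 8.1, 7.8

rise_to_date = h_ion(today) / h_ion(pre_industrial) - 1
rise_by_2100 = h_ion(year_2100) / h_ion(pre_industrial) - 1

print(f"H+ increase so far:  {rise_to_date:.0%}")   # ~26%
print(f"H+ increase by 2100: {rise_by_2100:.0%}")   # ~151%
```

A 0.1-unit drop in pH is already about a 26 percent rise in acidity; the projected 0.4-unit drop by 2100 would be roughly a 150 percent rise.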
Now a team of researchers from MIT, the University of Alabama, and elsewhere has found that such increased ocean acidification will dramatically affect global populations of phytoplankton — microorganisms on the ocean surface that make up the base of the marine food chain.
In a study published today in the journal Nature Climate Change, the researchers report that increased ocean acidification by 2100 will spur a range of responses in phytoplankton: Some species will die out, while others will flourish, changing the balance of plankton species around the world.
The researchers also compared phytoplankton’s response to ocean acidification with its response to other projected drivers of climate change, such as warming temperatures and lower nutrient supplies. For instance, the team used a numerical model to project that phytoplankton as a whole will migrate significantly, with most populations shifting toward the poles as the planet warms. Across these global simulations, however, they found the most dramatic effects stemmed from ocean acidification.
Stephanie Dutkiewicz, a principal research scientist in MIT’s Center for Global Change Science, says that while scientists have suspected ocean acidification might affect marine populations, the group’s results suggest a much larger upheaval of phytoplankton — and therefore probably the species that feed on them — than previously estimated.
“I’ve always been a total believer in climate change, and I try not to be an alarmist, because it’s not good for anyone,” says Dutkiewicz, who is the paper’s lead author. “But I was actually quite shocked by the results. The fact that there are so many different possible changes, that different phytoplankton respond differently, means there might be some quite traumatic changes in the communities over the course of the 21st century. A whole rearrangement of the communities means something to both the food web further up, but also for things like cycling of carbon.”
The paper’s co-authors include Mick Follows, an associate professor in MIT’s Department of Earth, Atmospheric and Planetary Sciences.
Winners and losers
To get a sense for how individual species of phytoplankton react to a more acidic environment, the team performed a meta-analysis, compiling data from 49 papers in which others have studied how single species grow at lower pH levels. Such experiments typically involve placing organisms in a flask and recording their biomass in solutions of varying acidity.
In all, the papers examined 154 experiments on phytoplankton. The researchers divided the species into six general functional groups, including diatoms, Prochlorococcus, and coccolithophores, then charted their growth rates under more acidic conditions. They found a whole range of responses to increasing acidity, even within functional groups, with some “winners” that grew faster than normal, while other “losers” died out.
The experimental data largely reflected individual species’ response in a controlled laboratory environment. The researchers then worked the experimental data into a global ocean circulation model to see how multiple species, competing with each other, responded to rising acidity levels.
The researchers paired MIT’s global circulation model — which simulates physical phenomena such as ocean currents, temperatures, and salinity — with an ecosystem model that simulates the behavior of 96 species of phytoplankton. As with the experimental data, the researchers grouped the 96 species into six functional groups, then assigned each group a range of responses to ocean acidification, based on the ranges observed in the experiments.
Natural competition off balance
After running the global simulation several times with different combinations of responses for the 96 species, the researchers observed that as ocean acidification prompted some species to grow faster, and others slower, it also changed the natural competition between species.
“Normally, over evolutionary time, things come to a stable point where multiple species can live together,” Dutkiewicz says. “But if one of them gets a boost, even though the other might get a boost, but not as big, it might get outcompeted. So you might get whole species just disappearing because responses are slightly different.”
Dutkiewicz says shifting competition at the plankton level may have big ramifications further up in the food chain.
“Generally, a polar bear eats things that start feeding on a diatom, and is probably not fed by something that feeds on Prochlorococcus, for example,” Dutkiewicz says. “The whole food chain is going to be different.”
By 2100, the local composition of the oceans may also look very different due to warming water: The model predicts that many phytoplankton species will move toward the poles. That means that in New England, for instance, marine communities may look very different in the next century.
“If you went to Boston Harbor and pulled up a cup of water and looked under a microscope, you’d see very different species later on,” Dutkiewicz says. “By 2100, you’d see ones that were living maybe closer to North Carolina now, up near Boston.”
Dutkiewicz says the model gives a broad-brush picture of how ocean acidification may change the marine world. To get a more accurate picture, she says, more experiments are needed, involving multiple species to encourage competition in a natural environment.
“Bottom line is, we need to know how competition is important as oceans become more acidic,” she says.
This research was funded in part by the National Science Foundation, and the Gordon and Betty Moore Foundation.
MIT analysis informs a new EPA report on the effects of curbing climate change.
Reducing global greenhouse gas emissions could have big benefits in the U.S. — including thousands of avoided deaths from extreme heat, billions of dollars in saved infrastructure expenses, and the prevented destruction of natural resources and ecosystems — according to a report released today by the U.S. Environmental Protection Agency (EPA).
The report, “Climate Change in the United States: Benefits of Global Action,” relies on research developed at the MIT Joint Program on the Science and Policy of Global Change to estimate the effects of climate change on 22 sectors in six areas: health, infrastructure, electricity, water resources, agriculture and forestry, and ecosystems. The report compares two possible futures: one with significant global action on climate change, and one in which greenhouse gases continue to rise.
“Understanding the risks posed by future climate change informs policy decisions designed to address those risks,” says John Reilly, co-director of the MIT Joint Program on the Science and Policy of Global Change. “This report quantifies the risks we might face by taking no action.”
The MIT researchers developed two suites of future climate scenarios, socioeconomic scenarios, and technological assumptions that serve as the foundation of the EPA report’s findings. In the first scenario, no new constraints were placed on greenhouse gas emissions. In the second, global warming was limited to 2 degrees Celsius through global climate action.
Research groups across the country then built on the scenarios developed at MIT to study how different sectors in the U.S. would fare under each future scenario. The groups studied impacts of climate change according to their own areas of expertise, ranging from lost wages due to extreme temperatures, to damage to bridges from heavy river flows, to the destruction of Hawaii’s coral reefs.
The MIT team also contributed heavily to the section of the report focusing on water resources. The report concludes that mitigating greenhouse gas emissions can reduce the risk of both damaging floods and droughts, and prevent future water management issues.
“Water is fundamentally linked to climate,” Reilly says. “Water needs to be in the right place at the right time. So as temperatures rise and precipitation patterns shift, you run the risk of having a mismatch between demand for water and the available supply in an area.”
The report is part of the ongoing Climate Impacts and Risk Analysis (CIRA) program, an EPA-led collaborative modeling effort among teams in the federal government, MIT, the Pacific Northwest National Laboratory, the National Renewable Energy Laboratory, and several consulting firms.
The scenarios developed at MIT serve as a common tie between the results produced by the many teams participating in the project. Each team used the MIT scenarios as inputs to their own modeling tools, uniting all of the estimates in the report with a set of shared assumptions about emissions growth and possible changes in future climate.
The report summarizes more than 35 studies that were individually peer reviewed in scientific journals. The full report and related materials are available at epa.gov/cira.
MIT Energy Initiative | China Energy and Climate Project researchers conclude that by designing and implementing aggressive long-term measures now, Chinese policy makers will put the nation on a path to achieve recently pledged emissions reductions with relatively modest impacts on economic growth.
By Nancy W. Stauffer, MIT Energy Initiative
Overview
Researchers from MIT and Tsinghua University in Beijing are collaborating to bring new insights into how China—now the world’s largest emitter of carbon dioxide (CO2)—can reverse the rising trajectory of its CO2 emissions within two decades. They use a newly developed global energy-economic model that separately represents details of China’s energy system, industrial activity, and trade flows. In a recent study, the team estimated the impact on future energy use, CO2 emissions, and economic activity of new policies announced in China, including a price on carbon, taxes on fossil fuel resources, and nuclear and renewable energy deployment goals. The researchers conclude that by designing and implementing aggressive long-term measures now, Chinese policy makers will put the nation on a path to achieve recently pledged emissions reductions with relatively modest impacts on economic growth.
—
Valerie Karplus of the MIT Sloan School of Management (left) and Xiliang Zhang of Tsinghua University pose on the Tsinghua campus. In their joint research, they are using a novel model to investigate the impacts of Chinese energy and climate policies on the country’s future energy use, carbon dioxide emissions, and economic activity. Photo courtesy of the Tsinghua-MIT China Energy and Climate Project
In November 2014, the presidents of the United States and China delivered a joint announcement committing their countries to new, aggressive measures to curb carbon emissions. Those pledges were seen as a breakthrough in global climate change negotiations. Until recently, binding commitments were in place only for a group of developed nations together responsible for about 15% of global carbon emissions. The new action involves a developed nation and a developing nation that together represent 45% of all emissions today. Moreover, it breaks a longstanding stalemate between the United States and China in which each has been waiting for the other to act first, and it sets the stage for other developing nations to declare their commitments to the global effort.
China’s pledge represents an ambitious target for a rapidly growing country. The country pledged to turn around the constant growth in its CO2 emissions by 2030 at the latest and to increase the fraction of its energy coming from zero-carbon sources to 20% by the same year—approximately double the share it has achieved so far. The commitment raises serious questions: Are those goals realistic, and if so, what new actions will be needed to accomplish them?
“To meet its new 2030 targets, China will need to take aggressive steps, including introducing a nationwide price on carbon emissions as well as preparing for the safe and efficient deployment of nuclear and renewable energy at large scale,” says Valerie J. Karplus, assistant professor of global economics and management at the MIT Sloan School of Management and director of the Tsinghua-MIT China Energy and Climate Project (CECP). “But with strong action, China’s targets are credible and within reach.”
Her assessment is based on studies that she and her CECP colleagues at MIT and Tsinghua University had been performing before the joint pledge was announced. The work was motivated by policy changes that were already occurring within China. In January 2013, an extreme episode of bad air quality and subsequent public outcry led to the adoption of a new air pollution action plan the following September. A few months later, Chinese policy makers at a major government summit pledged to tackle environmental problems by using new market-based instruments, including an emissions trading system that will put a price on CO2 emissions as well as taxes on fossil fuel resources that will incentivize firms to conserve energy.
“So whether it was for reasons of bad air quality or greater concern about climate change or deeper interest in market reforms, the winds of change were blowing,” says Karplus. “And we thought, ‘Well, we need to model this!’” A theoretical analysis could produce insights into how such policies might affect China’s energy system and carbon emissions, and it could shed light on the level of carbon tax that might be required to achieve a given emissions reduction—information that could help guide Chinese policy makers as they further define the details of their plans.
New model, new analyses
To perform their study, the MIT and Tsinghua collaborators used the China-in-Global Energy Model (C-GEM), which MIT researchers and Tsinghua graduate students developed while the Tsinghua students were visiting MIT three years ago. “In our joint research we use a model that was built with methods largely contributed by MIT but with detailed data and insights largely provided by our Tsinghua colleagues,” says Karplus.
Specifically, the researchers calibrated the model using domestic economic and energy data for China in both 2007 and 2010. And rather than combining all the energy-intensive industries into a single sector, they divided that group into six distinct sectors—both within China and within the 18 additional regions that represent the rest of the world in the model. That disaggregation is important for two reasons, says Karplus: Those six sectors differ in energy intensity and growth trends, and in China—unlike in many developed economies—they make up a significant share of economic activity and account for a large share of emissions. Finally, the researchers incorporated into the model changes in China’s economic structure that may occur as per capita income increases over time. In particular, the main driver of economic growth gradually shifts away from investment (for example, in infrastructure development) and toward consumption.
In their analysis of China’s policy initiatives, they assume two scenarios with differing levels of policy effort, plus one more scenario that assumes that no energy or climate policies are in place after 2010—an approach that is generally viewed as unsustainable but here serves as a baseline for comparison. The detailed assumptions for each scenario appear in the table below.
The Continued Effort (CE) scenario assumes that China remains on the path of reducing CO2 intensity (carbon emissions per dollar of GDP) by about 3% per year through 2050, consistent with an extension of commitments the country made at global climate talks in 2009. Importantly, the researchers find that a carbon price is needed to achieve such a reduction in carbon intensity; the needed improvements in energy efficiency and emissions do not result from normal equipment turnover and upgrading, as they have in the past. The CE scenario also assumes the extension of existing measures including resource taxes on crude oil, natural gas, and coal; a “feed-in tariff” that guarantees returns to renewable electricity generators; and increased deployment of hydroelectricity and of nuclear electricity (here assumed to be mandated by government).
The Accelerated Effort (AE) scenario is designed to achieve a more aggressive CO2 reduction—4% per year—and includes a carbon price consistent with that target. It assumes the same feed-in tariff as under the CE scenario, but now the assumed cost of integrating intermittent renewables is lower. It also assumes higher resource taxes on fossil fuels and greater deployment of nuclear electricity beyond 2020.
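The intensity targets that distinguish the two scenarios can be sanity-checked with simple arithmetic: annual emissions equal GDP times carbon intensity, so with constant rates they grow by (1 + GDP growth) × (1 − intensity decline) − 1 per year. The sketch below illustrates the mechanics; the 2010 baseline and GDP growth rate are illustrative placeholders, not figures from the C-GEM study.

```python
# Illustrative projection: emissions = GDP x carbon intensity.
# The baseline and GDP growth rate are hypothetical placeholders,
# NOT inputs or outputs of the C-GEM model.

def project_emissions(e0, gdp_growth, intensity_decline, years):
    """Project annual CO2 emissions under constant GDP growth and a
    constant annual rate of decline in CO2 intensity."""
    path = [e0]
    for _ in range(years):
        path.append(path[-1] * (1 + gdp_growth) * (1 - intensity_decline))
    return path

e0 = 8.0   # bmt CO2 in the base year (rough, illustrative)
g = 0.05   # assumed average annual GDP growth (illustrative)

ce = project_emissions(e0, g, 0.03, 40)  # Continued Effort: -3%/yr intensity
ae = project_emissions(e0, g, 0.04, 40)  # Accelerated Effort: -4%/yr intensity

# Net emissions growth is (1+g)*(1-r) - 1: with g = 5%, a 3%/yr intensity
# cut still lets emissions grow ~1.9%/yr, while 4%/yr cuts that to ~0.8%/yr.
print(round(ce[-1], 1), round(ae[-1], 1))
```

Note that in this constant-rate toy, emissions never actually peak while GDP growth outpaces the intensity decline; in the full model, GDP growth slows as the economy matures, which is what eventually brings emissions to a peak.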
Policy assumptions in each scenario.

Impact on energy demand and emissions
The first figure below shows total energy demand over time for the three scenarios, plus the actual breakdown of primary energy use by source in 2010 and estimates thereafter from the AE scenario only. The second figure shows total CO2 emissions between 2010 and 2050 for each of those scenarios.
With no energy or climate policies in effect (the “No Policy” scenario), total CO2 emissions continue to rise through 2050, with no peak in sight. Rising emissions are mainly due to continued reliance on China’s domestic coal resources. In 2050, more than 66% of all energy comes from coal, and coal use grows to more than 2.8 times its current level, a degree of reliance already widely viewed as untenable within China.
In the CE scenario, total energy use is well below the No Policy case, and that decline generates disproportionately high reductions in emissions. Carbon emissions level off at about 12 billion metric tons (bmt) in the 2030 to 2040 time frame. The CO2 charge needed to achieve that outcome reaches $26/ton CO2 in 2030 and $58/ton CO2 in 2050. Deployment of non-fossil energy is significant, and its share of total energy demand climbs from 15% in 2020 to about 26% through 2050. Nuclear power expands significantly to 11% of total primary energy in 2050. Coal continues to account for a significant share of primary energy demand (39% in 2050). The share of natural gas use nearly doubles between 2030 and 2050, while the share of oil use increases slightly over the same period.
Under the AE scenario, CO2 emissions level off in the 2025 to 2035 time frame, peaking at about 10 bmt—about 20% above current emissions levels. The carbon price rises from $38/ton CO2 in 2030 to $115/ton CO2 in 2050. Those prices are substantially above the levels in the CE scenario, but CO2 emissions now peak as much as a decade earlier.
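The carbon prices quoted for the two scenarios imply steady growth rates that are easy to back out. This is a back-of-envelope check using only the dollar figures reported above; it is not part of the model itself.

```python
# Implied average annual growth of the carbon price between 2030 and 2050,
# computed from the prices reported for each scenario.

def annual_growth(p_start, p_end, years):
    """Constant annual growth rate taking p_start to p_end over `years` years."""
    return (p_end / p_start) ** (1 / years) - 1

ce_rate = annual_growth(26, 58, 20)    # Continued Effort: $26 -> $58/ton CO2
ae_rate = annual_growth(38, 115, 20)   # Accelerated Effort: $38 -> $115/ton CO2
print(f"CE: {ce_rate:.1%}/yr, AE: {ae_rate:.1%}/yr")
```

Both paths rise at a few percent per year, which is broadly consistent with the standard result that a least-cost emissions-price trajectory grows at roughly the rate of interest.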
Demand for energy in the three scenarios. This figure shows total energy demand in the No Policy, Continued Effort, and Accelerated Effort scenarios, with the primary energy mix shown for the Accelerated Effort scenario only. Both levels of policy effort significantly decrease total energy demand relative to the baseline No Policy scenario, which assumes that no energy or climate policies are in effect after 2010.

Total CO2 emissions in China in the three scenarios. The decreases in energy demand in the Continued Effort and Accelerated Effort scenarios (shown in top figure) bring about even greater reductions in CO2 emissions. While the Accelerated Effort scenario involves more aggressive measures than the Continued Effort scenario does, it causes CO2 emissions to peak about a decade earlier and brings more substantial decreases thereafter.
Under the AE scenario, non-fossil energy accounts for fully 39% of the primary energy mix by 2050. Wind, solar, and biomass electricity continue to increase through 2050 (as they do in the CE scenario), and nuclear is now 16% of the total energy mix. Despite its relatively low carbon content, natural gas is eventually penalized by the increasing carbon price. Between 2045 and 2050, natural gas use actually starts to decline in absolute terms, not just as a share. Coal’s share drops dramatically—from 70% in 2010 to about 28% by 2050, with peak use occurring in about 2020. In contrast, oil’s share of energy use continues to increase through 2050.
The differing outcomes for coal and oil are largely due to the availability and cost of substitutes. Coal is the least expensive fuel to displace; it has many substitutes, including wind, solar, nuclear, and hydro in the power sector and natural gas and biomass in industrial processes. In contrast, fewer substitutes are available for oil-based liquid fuels used in transportation; and switching to the alternatives—for example, biobased fuels or electric vehicles—is a relatively expensive way to reduce CO2 emissions. “As a result, oil consumption is relatively insensitive to a carbon price,” says Karplus. “So China seems set to account for a significant share of global oil demand over the period being considered.”
Relevance for the new Chinese pledge
The findings of the Tsinghua-MIT analysis have direct relevance for current policy making in China. Under the new bilateral agreement with the United States, China needs to start reducing emissions at or before the year 2030. In the AE scenario, the researchers find that by starting now, China should be able to meet that 2030 target at an added cost to the economy that rises to just 2.6% of China’s domestic consumption by 2050—a relatively modest impact on the country’s economic development. “While we are currently working on detailed calculations, we expect that the economic cost will be offset—at least to some extent—by associated reductions in the environmental and health costs of China’s coal-intensive energy system,” says Karplus.
The CECP researchers are continuing to study how different energy and climate policies in China could be used to support the achievement of the China-US agreement. For example, they are looking at the carbon trading system now being tested in several regions of China, in particular, examining which provinces win or lose under the carbon price and how policy design choices can mitigate uneven impacts. And they’re investigating how a carbon price affects air pollution and how air pollution policies affect carbon emissions.
Karplus stresses that their work is not meant to be a crystal ball that tells the future. “It’s really intended to develop our collective intuition of the level of effort required to change China’s energy system,” she says. “And because the CECP involves both Chinese and US contributors, we are in a position to offer analyses and outputs that we hope will be trusted and valued by policy makers in both countries as they work to strengthen the bilateral relationship.”
Already Chinese policy makers are benefiting from the CECP, according to Professor Xiliang Zhang, CECP lead and director of the Institute of Energy, Environment, and Economy at Tsinghua University. He says, “The work of the CECP has played an important role in helping policy makers in China understand the challenges and opportunities that will accompany the country’s low-carbon energy transformation.”
This research is supported by Eni S.p.A., the French Development Agency (AFD), ICF International, and Shell International Limited, founding sponsors of the MIT-Tsinghua China Energy and Climate Project (CECP). (Eni S.p.A. and Shell are also Founding Members of the MIT Energy Initiative.) The Energy Information Administration of the US Department of Energy and the Energy Foundation also supported this work as sustaining sponsors. Additional support came from the National Science Foundation of China, the Ministry of Science and Technology of China, the National Social Science Foundation of China, and Rio Tinto China, and from the MIT Joint Program on the Science and Policy of Global Change through a consortium of industrial sponsors and US federal grants. Further information can be found in:
X. Zhang, V.J. Karplus, T. Qi, D. Zhang, and J. He. Carbon Emissions in China: How Far Can New Efforts Bend the Curve? MIT Joint Program on the Science and Policy of Global Change Report No. 267, October 2014.
MIT Spectrum interviews MIT alumnus Kenneth Strzepek, who led a nonpartisan panel of 17 experts to investigate the international water debate between Egypt and Ethiopia in the hopes of forging a common solution.
For millennia, Egypt has relied on the Nile River for its agriculture. So Egyptians were understandably upset in 2011 when their upstream neighbor, Ethiopia, announced plans to build a hydroelectric dam that threatened to reduce the flow out of the spigot: the Grand Ethiopian Renaissance Dam (GERD), sited along a major tributary that contributes most of the water flowing into the Nile. Two years ago, then-President Mohammed Morsi even threatened to go to war.
In an effort to break the stalemate, Kenneth Strzepek ’75, SM ’77, PhD ’80 led a nonpartisan panel of 17 experts convened last November through MIT’s new Abdul Latif Jameel World Water and Food Security Lab (J-WAFS) to investigate the issue and forge a common solution. MIT Spectrum spoke this spring with the alumnus—who is currently a research scientist with the MIT Joint Program on the Science and Policy of Global Change and the MIT Center for Global Change Science—about the “great moral dilemma” at the heart of the conflict, and the value of objective advice.
What is your background on water issues in the Nile Basin?
I did my PhD at MIT on water issues in Egypt. For the last 10 years, I’ve been working with the World Bank on the Nile Basin Initiative to come up with a comprehensive framework agreement between all the sovereign states in the region on how to manage the Nile.
What is it that draws you to work on water issues?
Water is such a metaphor for life. At one point, I thought I might go into the ministry. When I went to Africa as an MIT sophomore, I saw the great impact of water on people’s lives, and I realized water resources development was a way I could integrate my faith with my profession by providing physical water as well as spiritual water to people.
What are the roots of the conflict between Egypt and Ethiopia?
Rather than one principle on allocating water across boundaries, the UN has two principles—that all people should have equal access to water within their boundaries, and also that there should be no harm to anybody who is currently developed downstream. Egypt has been using all of this water for thousands of years; if anyone upstream uses some of it, that violates the “do no harm” principle. On the other hand, if 75% of their water comes from Ethiopia, how is it equitable that [Ethiopia] can’t take a drop? So we have this great moral dilemma.
What were the major questions you discussed?
When this dam is completed and filled, it is going to lead to some additional evaporation, and less water going to Egypt, though some suggest that joint operation of the GERD and the Egyptian Aswan High Dam (AHD) could reduce total losses. Could the impact of water loss on Egypt’s economy be offset by Ethiopia selling some of the GERD’s low-cost, clean electricity to Egypt so there would be benefits to both countries? We also knew that since the capacity of the dam is greater than the annual flow of the river, the issue of filling the dam was critical—if Ethiopia started filling the dam and there was a drought, could they stand to wait for years before resuming?
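The filling question can be made concrete with a toy water-balance calculation. The capacity and mean-flow figures below are approximate public estimates (roughly 74 and 49 billion cubic meters), while the fill policy and the drought sequence are invented for illustration; none of this comes from the panel’s report.

```python
# Toy model: if Ethiopia diverts a fixed fraction of each year's Blue Nile
# flow into the GERD reservoir, how long does filling take, and how much
# does an early drought stretch it out?

def years_to_fill(capacity, annual_flows, fill_fraction):
    """Count years until cumulative diverted flow reaches reservoir capacity."""
    stored = 0.0
    for year, flow in enumerate(annual_flows, start=1):
        stored += fill_fraction * flow
        if stored >= capacity:
            return year
    return None  # not filled within the given flow record

CAPACITY = 74.0                               # bcm, approximate GERD capacity
NORMAL = [49.0] * 30                          # bcm/yr, approximate mean flow
DROUGHT = [49.0, 20.0, 20.0] + [49.0] * 27    # hypothetical two-year drought

print(years_to_fill(CAPACITY, NORMAL, 0.25))   # steady flows
print(years_to_fill(CAPACITY, DROUGHT, 0.25))  # same policy, early drought
```

Because the reservoir holds more than a full year’s flow, even a modest diversion policy takes the better part of a decade to fill it, and a drought early in the filling period pushes the date out further — the trade-off the panel identified between filling quickly and protecting downstream users in dry years.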
What kind of debates did you have among the members on your panel?
Most of the conclusions were quite universal. When you are not party to a debate, it’s not as impassioned for you. None of us have that history of distrust that the governments have. When Egypt says “We’ve been using that water for 10,000 years,” Ethiopians will say, “Yeah, our water!” Most of us saw that if this was all one country, there would still be upstream-downstream debates, but you could work out a win-win solution.
What are some of the recommendations you made?
The first conclusion is the need to manage the dam cooperatively with the AHD in Egypt. No river with two reservoirs of such size will benefit both parties without a plan to operate them in concert. Not to manage them cooperatively would be a recipe for disaster. Secondly, the dam is going to produce a lot of electricity, but right now there is no sales agreement or connection to export it out of Ethiopia. There needs to be a power plan in place to bring electricity to users or Ethiopia will have no incentive to let water out of the dam through its turbines so it can reach Egypt.
How have the countries responded to the report?
Both Egypt and Ethiopia have commented on the report, and although they expressed some reservations, a week after we presented it, the countries signed a declaration of principles, which is basically an agreement to agree. We can’t know how much impact our report had on that decision, but it was in their hands when they signed the agreement. They are still far away from agreeing to the specific plan to operate the GERD, but the report and follow-up discussions in Cairo and Addis Ababa have outlined a process to facilitate the technical steps towards developing such a plan.
What do you think your report achieved?
A nonpartisan, world-class, international group convened by MIT and including a number of MIT experts has outlined the technical issues facing Ethiopia, Sudan, and Egypt, and has helped put a boundary to negotiations among the countries. We wanted to make this public so there would be some sound technical information out there as they continue their negotiations. We have offered an objective assessment of the current situation and built connections among key water decision makers involved in the basin. I am very proud of what MIT and J-WAFS did; I pray this activity has and will continue to reduce conflict in the region.
Read more about the Grand Ethiopian Renaissance Dam report at MIT News.