News + Media

Around Campus
MIT News

Speakers at 10th annual MIT Energy Conference see progress, but great need for more research.

David L. Chandler | MIT News Office

At the conclusion of MIT’s 10th annual Energy Conference, panelist Cheryl Martin, director of the U.S. Department of Energy’s ARPA-E research program, declared, “There is no more important issue than energy.”

Urging students to work to supply sufficient energy for a growing population while reducing emissions, Martin added, “It needs every discipline to be in the game.”

The student-run event was founded by David Danielson PhD ’08, then an MIT graduate student and now the U.S. assistant secretary of energy and director of the Department of Energy (DOE) Office of Energy Efficiency and Renewable Energy. Danielson, who also founded MIT’s Energy Club — now the Institute’s largest student group — returned to campus as both a keynote speaker and as moderator of the final panel discussion at this year’s conference.

“The last 10 years have been a wild ride in energy,” Danielson said, reflecting on dramatic and unexpected changes including the shale-gas boom, the rise of extreme weather events, and a steep drop in the costs of solar and wind power.

Danielson observed that MIT’s long history of cross-disciplinary research has been especially relevant to energy and combating climate change. “This interdisciplinary culture is a critical element,” he said. “The divisions really blur.”

Danielson also reflected on how far the Institute has progressed in energy research and education. When he arrived on campus in 2001 as a student interested in solar power, he was astonished to find only one course in that field, and no energy-related student organization. Now MIT has dozens of researchers working in the solar arena; the Energy Club has more than 5,000 student, faculty, staff, and alumni members, and organizes more than 100 events every year. Danielson also pointed out that five of the club’s past presidents are now working at the DOE.

And MIT research is having a real impact, he added: Among the 17 initial advanced research projects that the DOE funded when Danielson started working there, one was by Donald Sadoway, the John F. Elliott Professor in Materials Science and Engineering, to produce a low-cost liquid battery for utility-scale storage. While this “phenomenal idea” was first seen as risky and speculative, it has now led to the creation of a company that has raised $50 million and will be installing its first grid-scale batteries this year.

Robert Armstrong, the Chevron Professor in Chemical Engineering and director of the MIT Energy Initiative, kicked off the conference by summarizing a dual challenge: On the one hand, he said, the world will likely need to double its energy supply by 2050, thanks to a combination of population growth and rising standards of living in developing nations. At the same time, there is a need for drastically reduced emissions from present energy-supply systems, which are still mostly based on fossil fuels. “How do you double the output and still reduce carbon emissions?” Armstrong asked.

He suggested five broad areas that could contribute: a major growth in cheap, reliable solar energy; better methods for storing energy; improvements in the adaptability and reliability of the electric grid; increased use of nuclear energy; and the development of affordable and dependable carbon-capture and sequestration systems.

The conference featured addresses by energy executives including Thomas Siebel, founder and CEO of C3 Energy; William Colton, vice president for strategic planning at Exxon Mobil; Ahmad Chatila, president and CEO of SunEdison; Dirk Smit, vice president of exploration technology research and development at Shell Oil; and William von Hoene Jr., chief strategy officer of Exelon Corp.

Siebel, who started C3 Energy after founding and running a large software-services company, said he sees enormous opportunities in applying big-data analytics to the ever-growing data generated by energy-monitoring systems such as smart meters and home-automation devices. “The data rates are staggering,” he said; his company has developed analytical systems that can save utilities $300 per meter, on average, by improving efficiency in power distribution based on real-time usage information.

But Siebel also pointed out the vulnerabilities inherent in the nation’s electric grid, which he called “the largest and most complex machine ever built. … It’s a large problem, and nobody’s going to do anything about it until something breaks.”

Chatila, who left the semiconductor industry to lead solar-energy company SunEdison, said the big challenges for renewable energy are cost and intermittency. But he also finds cause for optimism in the extraordinary, often-unanticipated breakthroughs in the world of microprocessors. Over the last two decades, he said, the cost of solar cells has dropped from $10 a watt to $0.35, a trend he expects to continue. Within a few years, Chatila said, solar will be the lowest-cost option for new electric production, without ongoing subsidy. In fact, he said, that’s already the case in some places, such as in much of India.

Meanwhile, there are enormous opportunities for savings based on using the energy we produce more efficiently, several speakers said. Harvey Michaels, a research scientist and lecturer on energy efficiency at the MIT Sloan School of Management, said that efficiency upgrades such as better home insulation and more efficient appliances and machinery represent a $25 billion field and an “untapped potential resource.”

“When energy prices go through the roof, it’s a lot easier to sell,” Michaels said. “But we’re not going to have that for quite a while.”

Around Campus
ClimateWire, via Scientific American

Plans to clean up China's air may increase emissions of carbon dioxide.

ClimateWire via Scientific American

China's efforts to improve urban air quality are often assumed to aid the fight against climate change, but a new joint China-U.S. study says otherwise.

The study—carried out by researchers at the Massachusetts Institute of Technology and Tsinghua University in Beijing—was released last week. It shows that China's strategies for cleaning up its air do not necessarily lead to reductions in carbon dioxide emissions. Sometimes, according to the study, the efforts could actually increase them.

The study came as air quality climbed to near the top of China's policy priorities, especially after record air pollution levels in 2013. The smog triggered an unprecedented public outcry that motivated Chinese leaders to declare a "war on pollution."

China rolled out its Air Pollution Action Plan, which calls for limiting coal to 65 percent of the primary energy mix and prohibiting any increase in coal use in three major urban regions along the coast. In addition to displacing coal, the plan also promotes the installation of desulfurization, dust-removal equipment and other pollutant treatment technologies in industrial boilers, furnaces and power plants, particularly those close to cities.

"The urgency with which Beijing is tackling air pollution is certainly positive, and these efforts will also have related benefits in curtailing carbon dioxide emissions—to a certain extent," the report said. "But it would be a mistake to view the current initiatives on air pollution, which are primarily aimed at scrubbing coal-related pollutants or reducing coal use, as perfectly aligned with carbon reduction."

That is because once low-cost opportunities to reduce coal are exhausted, the continued displacement of coal from China's energy mix will become more expensive. If the focus remains narrowly on air quality, the researchers say, Chinese power producers will likely stick with end-of-pipe solutions—such as scrubbing pollutants from the exhaust stream of coal power plants—rather than switching to use more renewable energy.

That, in turn, slows China's green transition in its energy structure. Worse yet, according to the researchers, if the pollution-scrubbing technologies run on coal-generated electricity, using them could increase carbon emissions, even as air quality improves.

Read the full article on Scientific American

Video

John Marshall, the Cecil and Ida Green Professor of Oceanography in MIT’s Department of Earth, Atmospheric and Planetary Sciences and the Director of MIT's Climate Modeling Initiative, spoke with the MIT Club of Northern California about the role oceans play in global climate change.

Around Campus
MIT News

New research from MIT and the Woods Hole Oceanographic Institution reveals a hidden deep-ocean carbon cycle.

Cassie Martin | Oceans at MIT

Understanding how oceans absorb and cycle carbon is crucial to understanding the ocean's role in climate change. For approximately 50 years, scientists have known there exists a large pool of dissolved carbon in the deep ocean, but they didn’t know much about it — such as the carbon’s age (how long it has been in organic form), where it came from, how it got there, and how these factors influence its role in the carbon cycle.

Now, new research from scientists at MIT and the Woods Hole Oceanographic Institution (WHOI) provides deeper insights into this reservoir and reveals a dynamic deep-ocean carbon cycle mediated by the microbes that call this dark, cold environment home. The work, published in Proceedings of the National Academy of Sciences, suggests the deep ocean plays a significant role in the global carbon cycle, and has implications for our understanding of climate change, microbial ecology, and carbon sequestration.

For years, scientists thought that carbon of varying ages made up the deep-ocean reservoir and fueled the carbon cycle, but nobody could prove it. “I’ve been trying for over 20 years,” said Daniel Repeta, a senior scientist in marine chemistry and geochemistry at WHOI and co-author of the study. “Back then we didn’t have a good way to go in and pull that carbon apart to see the pieces individually. We would get half-answers that suggested this was happening, but the answer wasn’t clear enough,” he says. With the help of Daniel Rothman, a professor of geophysics in MIT’s Department of Earth, Atmospheric, and Planetary Sciences (EAPS), and Chris Follett, a postdoctoral associate in Mick Follows’ group and formerly of Rothman’s group, Repeta would soon find the answers he was looking for.

Follett thought a next-generation method called a step-wise oxidation test might be able to reveal the age distribution of the carbon pool. The team exposed water samples taken from the Pacific Ocean to ultraviolet radiation, which converts organic carbon to carbon dioxide. The gas was then collected and measured for radiocarbon, which Follett used to estimate the carbon isotopes’ ages and cycling rate. “At a minimum there are two widely separated components — one extremely young and one extremely old, and this young component is fueling the larger flux through the dissolved carbon pool,” he says.

In other words, the youngest source of dissolved organic carbon in the deep ocean originates from the surface, where phytoplankton and other marine life fix carbon from the atmosphere. Eventually these organisms die and sink down the water column, where they dissolve and are consumed by microbes. Because it takes roughly 1,000 years for the ocean’s surface waters to replace bottom waters, scientists thought the few-centuries-old carbon, some of which is anthropogenic, couldn’t possibly contribute to the deep-ocean pool. That view no longer holds.

The researchers found that as particulate organic carbon sinks through the water column and dissolves, some of it is sequestered in the reservoir and respired by microbes. The results suggest a more active carbon cycle in the deep ocean bolstered by bacteria that utilize the reservoir as a food source. “We previously thought of the deep ocean as a lifeless and very slow system,” Repeta says. “But those processes are happening much faster than we thought.” If this microbial pump is in fact more robust, then it gives more credence to the idea of using the mechanism to sequester carbon in the deep ocean — a concept some scientists have been working on in recent years.

While some carbon in the reservoir may cycle faster, older carbon cycles much slower. This is because older sources such as hydrothermal vents, methane seeps, and ocean sediment produce carbon that isn’t easily consumed. However, these sources are often disregarded in analyses of the marine carbon cycle because they are considered too small in magnitude to be significant. But when Follett accounted for them in calculating the reservoir’s turnover, or the time it takes for carbon to completely cycle, what he found was astounding. The turnover time of the older portion of the reservoir is 30,000 years — 30 times longer than it takes for the ocean itself to cycle — which indicates these sources may be relevant. “To find something that is more consistent with the biochemical story was fun and surprising,” says Follett. “A lot of people have proposed these ideas over the years, but they haven’t had the evidence to back them up. It was nice to come in and give them the evidence they needed to support these ideas.”

So what do these findings mean for the climate system? In the short term, not much. But on a longer timescale, one that spans thousands of years, it could affect projections of the amount of atmospheric and sequestered carbon. “It potentially has a very important influence on climate through its role in sequestration of carbon away from the atmosphere,” says Mick Follows, an EAPS associate professor in the Program of Atmospheres, Oceans, and Climate, who was not involved in the study. “If some radical change occurred that changed the nature of that pool, then it could have an effect on climate through greenhouse gases’ influence on the atmosphere.” Such changes might include, for example, deep-ocean temperature fluctuations affecting microbial activity, or a shifting surface ocean environment that could affect plankton and other organisms from which dissolved organic carbon originates.

“One of the things I’ve taken away from the work is that in a way, they’ve transformed a view of how people are thinking that pool is turning over in the deep ocean and what the sources of that are,” says Follows. “It seems like a very profound change in our understanding of how the system works relative to ingrained perspectives.”

Commentary
USA Today

We can expect oil prices to remain low for the foreseeable future, writes John Reilly in a column for USA Today.

John M. Reilly | Co-director, MIT Joint Program on the Science and Policy of Global Change

Worldwide, people are learning to live with less gas, but that's a habit hard to keep.

The price of oil has fallen nearly 60% since peaking in June, and lately there's been a lot of ink and pixels devoted to the question of whether oil prices will plunge even further or whether they will shoot right back up. An even bigger issue is whether prices will stay at these very low levels.

While I doubt oil prices will fall much more — how much further could they reasonably tumble? Perhaps another $20 or so? — history suggests we can expect prices to remain low for the foreseeable future. What's playing out right now in the oil market is likely the same supply-demand dynamic we've seen over and over: several years of extremely high oil prices followed by decades of low prices. The twin oil shocks of the 1970s, for instance, resulted in 20 to 25 years of low prices.

Of course, things are different today — but not that much different. Over the past six or seven years, oil has been relatively expensive, often trading at over $100 a barrel. During that time, both the supply and demand sides of the equation have responded.

On the supply side, high prices have spurred investment in oil and gas exploration. Even as OPEC (Organization of the Petroleum Exporting Countries) has maintained steady production, the U.S. is experiencing a drilling revival and the shale industry is booming. Oil production is up in other countries, too. Canada has boosted its crude oil production, as have Russia and Libya. With more oil on the market, prices predictably have fallen.

On the demand side, many developed countries — including the U.S. — are using less oil. Policies such as CAFE (Corporate Average Fuel Economy) standards have helped improve the fuel economy of cars and light trucks. Consumers, meanwhile, have recently demanded higher-mileage cars.

There are sociological forces driving oil prices down as well. For instance, people are more likely to live in cities (rather than car-dependent suburbs) and are choosing to walk or bike more.

The upshot is that unless the world changes dramatically, we should expect oil prices to remain low for at least the next 10 years. On the supply side, investments in production will continue to bear fruit; and history suggests the forces on the demand side will play out for another decade or two — or at least until people forget about high prices.

Read the full article here.

Around Campus
MIT Energy Initiative

In one of the ten panels open to the public at the upcoming MIT Energy Club Conference, MIT energy economist Christopher Knittel will explore the future of shale gas with fellow experts in the field.

Francesca McCaffrey | MIT Energy Initiative

Leaders from the energy industry, government, and the scientific community will gather to discuss the world’s most pressing energy challenges at the annual MIT Energy Club Conference, to be held February 27-28 on the MIT campus. Developed and organized entirely by MIT students, the conference is this year celebrating its 10th anniversary. 

Christopher Knittel, MIT’s William Barton Rogers Professor of Energy and a professor of applied economics at the Sloan School, gave MITEI a preview of the panel he’ll be moderating on the opening day of the conference. In the panel, called “Unconventional resources: successes and challenges,” Knittel will focus on the future of shale gas development in the U.S. and globally.

Here, Knittel shares some brief advance insight about the future of shale gas development.

Q: Does shale gas really provide the largest share of U.S. natural gas production?

Knittel: According to the EIA, 40% of U.S. natural gas production in 2012 came from shale resources. Over 60% of our natural gas comes from shale basins and what are known as tight reservoirs, which typically use the same drilling techniques as shale natural gas.

Q: Why has shale gas experienced such a boom in the US?

Knittel: The short answer is that there has been a tremendous amount of technological progress in the ability to extract natural gas (and oil) trapped in shale and tight formations. Geologists have known for years that hydrocarbons were trapped in shale basins, but not until the development of horizontal drilling, along with hydraulic fracturing techniques, have energy companies been able to economically recover these hydrocarbons.

Q: In a recent MIT News interview, you discussed how a central tenet of the EPA’s forthcoming Clean Power Plan involves shifting states from coal to natural gas. What role do you see federal regulation playing in the future of shale gas in the US?

Knittel: There are a number of important roles for federal regulation. First, left alone, the market will not shift enough generation away from coal to natural gas. This is because of a number of "externalities" that exist in fossil fuel markets: social costs associated with burning fossil fuels that go unpaid by consumers and firms in these markets. Correcting these is the role that policy makers must take. Ideally, we would have a set of pollution taxes, not just a carbon tax, but also taxes on particulate matter, mercury, etc. Because these types of policies tend to be politically infeasible, politics drive us to policies like the Clean Power Plan.
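Knittel's point about unpriced externalities can be made concrete with a little arithmetic. The fuel costs and emission rates below are rough, illustrative assumptions, not figures from the interview; the sketch only shows how pricing the externality can flip the cost ranking of coal and gas generation.

```python
def cost_with_tax(fuel_cost, emissions_rate, tax):
    """Generation cost in $/MWh once a carbon tax prices the externality.

    fuel_cost      -- operating cost in $/MWh (assumed, illustrative)
    emissions_rate -- tons of CO2 emitted per MWh (assumed, illustrative)
    tax            -- carbon tax in $/ton of CO2
    """
    return fuel_cost + emissions_rate * tax

# Rough illustrative numbers: coal is cheaper to run but emits more.
coal = {"fuel_cost": 30.0, "emissions_rate": 1.0}   # ~1 t CO2/MWh
gas = {"fuel_cost": 40.0, "emissions_rate": 0.45}   # ~0.45 t CO2/MWh

for tax in (0, 25, 50):
    print(tax, cost_with_tax(tax=tax, **coal), cost_with_tax(tax=tax, **gas))
# With no tax, coal wins (30 vs. 40 $/MWh); at $25/ton the ranking flips
# (55 vs. 51.25) -- the shift the market alone does not deliver.
```

With real dispatch the numbers differ, but the mechanism is the same: an untaxed market sees only the fuel cost, not the social cost.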

Q: Will the advent of shale gas have an impact on the development of alternative energy sources?

Knittel: Anything that lowers the prices of fossil fuels will slow the development of alternative energy sources. This is true not just for natural gas, but also for oil. Lower natural gas prices make solar and wind technologies more expensive on a relative basis. Similarly, the drop in oil prices will make it more difficult for alternative fuel vehicles to compete in the marketplace. Over the longer term, these lower fossil fuel prices will reduce R&D into alternative technologies.

Q: What are some of the key political and environmental issues that shale gas producers face?

Knittel: In the U.S., any drilling activity has environmental risks, and hydraulic fracturing is no different. In addition, the added step of pumping millions of gallons of water down into the well creates a new set of environmental risks. Furthermore, natural gas that escapes through leaks, known as fugitive emissions, is mostly methane, a much more potent greenhouse gas than carbon dioxide. It is important for the federal EPA and state-level agencies to ensure that best practices are used in drilling for natural gas and that these practices, as well as fugitive emissions, are adequately monitored.

Q: Do you expect shale gas to truly be a “bridge” fuel, used only as a temporary solution while renewable energy technologies improve, or do you think that shale gas will hold a lasting spot in our energy ecosystem?

Knittel: This will depend on policy. Policy makers must create a set of incentives that will move markets away from natural gas and into renewable technologies. Absent these, natural gas may push coal out of the market and remain the main fuel source in electricity markets. 

Professor Knittel will continue the discussion on shale gas development with panelists Paul Sheng, director at McKinsey & Co.; Jan Erik Johansson, principal consultant at TCS; and Helen Currie, senior economist at ConocoPhillips, on Friday, February 27, at 2 p.m. Afternoon lectures on Friday will be held at the Marriott.

To attend Professor Knittel’s panel, or to view the rest of the conference agenda and reserve your ticket, visit the MIT Energy Club Conference website.

In The News
MIT News

MIT graduate students brush up on the fundamentals of climate science and policy

Photo: Paul Kishimoto

by Audrey Resutek | MIT Joint Program on the Science and Policy of Global Change

Graduate students from the Joint Program on the Science and Policy of Global Change taught a series of classes in January as part of MIT’s annual Independent Activities Period (IAP) that were designed to bring students and community members up to speed on basic climate science, climate policy, and the state of international climate negotiations.

International climate action

Amanda Giang, a graduate student in the Engineering Systems Division, led a session on January 30 on recent climate negotiations. Climate change is a vexing international problem in part because it is a commons problem—a type of problem with which many graduate students may already be familiar, she said.

A dirty kitchen is an example of a commons problem, said Giang, who has roommates. “We all share the kitchen, so it’s in no one’s best interest to clean the kitchen alone. If I clean the kitchen myself, I have to do all the work while everyone gets the benefit. But if no one cleans the kitchen we all suffer. What we really need is some sort of coordinated collective action, where I take out the trash and my roommate does the dishes.”

Because of this, an international agreement is the best route for action. Giang reviewed the recent history of global climate negotiations, including the UN’s efforts leading up to the next round of talks in Paris this winter, where countries are expected to come to an agreement on post-2020 climate action. Giang also discussed existing greenhouse gas mitigation efforts in the US and China, and the recent emissions deal between the two countries.

Economic measurements

Paul Kishimoto, a graduate student in the Engineering Systems Division, led sessions on January 29 and January 30 on the economics of climate change and climate policy.

Economists measure the effects of climate change as costs, both direct and indirect. As an example, Kishimoto asked the class to consider how statistically warmer weather might affect a runner who jogs along the Charles. If the runner goes jogging when it’s too hot, gets heat stroke, and has to go to the hospital, that is a cost directly related to climate change. If the runner skips the run, missing out on an activity they would otherwise do, it’s counted as an indirect, or counterfactual, cost of climate change. Calculating both the costs of climate change and the costs of policies allows researchers to evaluate the effectiveness of policies addressing climate change, he said.

 

Photo: Amanda Giang

Kishimoto also discussed how different types of policies aimed at reducing greenhouse gas emissions work, including measures like carbon taxes and trading plans, regulations, and policies encouraging research and development of new technology.

Climate science measurements

Daniel Gilford and Jareth Holt, graduate students in the Department of Earth, Atmospheric and Planetary Sciences, led a session on January 29 on how climate scientists measure climate change.

Gilford started the class out by explaining the concept of radiative forcing, which is a measure of the net difference between the energy the Earth and atmosphere absorb from sunlight and the energy released back into space after a change in atmospheric composition (such as increasing CO2). A change that traps more heat in the Earth system is a positive radiative forcing and contributes to warming. The primary gas causing increased radiative forcing is CO2, but other gases like methane, nitrous oxide, and ozone also play a role.
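The forcing from CO2 in particular is often estimated with a simple logarithmic fit. The 5.35 W/m² coefficient and 278 ppm preindustrial baseline below are standard empirical values from the climate literature, not figures from the class, so read this as an illustrative sketch:

```python
import math

def co2_forcing(c_ppm, c0_ppm=278.0):
    """Approximate radiative forcing (W/m^2) from changing CO2.

    Uses the widely cited logarithmic fit dF = 5.35 * ln(C / C0),
    where C0 is a preindustrial reference concentration (assumed
    here to be 278 ppm). The coefficient is an empirical fit.
    """
    return 5.35 * math.log(c_ppm / c0_ppm)

# Doubling CO2 yields a positive forcing of about 3.7 W/m^2:
print(round(co2_forcing(2 * 278.0), 2))  # prints 3.71
```

Because the relationship is logarithmic, each successive increment of CO2 adds somewhat less forcing than the one before it.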

Jareth Holt discussed how climate models account for factors that affect radiative forcing. To do this, models have become more complex, Holt said. For example, in the 1990s, climate models underestimated the importance of aerosols in calculating radiative forcing and represented them only crudely. Models now have more detailed representations of how aerosols behave in the atmosphere.

On the other hand, there are reasons researchers might want to simplify models. Modern climate models run on supercomputers, he explained, and can take weeks or even months to complete a single simulation. Simpler models run more quickly and allow researchers to complete a larger number of simulations, helping them understand the uncertainty in the climate system. As a result, climate modeling requires constant balancing between complexity and computational efficiency.

Climate fundamentals

Daniel Gilford and Jareth Holt led a session on January 26 covering basic climate science and the history of the discipline. Climate science, Holt said, is the study of the weather's variability, patterns, and statistics over time.

Photo: Daniel Gilford

The field can trace its roots back to the 1820s, when Joseph Fourier discovered that the Earth’s atmosphere traps heat. The modern study of climate change got its start in the 1890s when Svante Arrhenius built the first simple model balancing energy in the Earth system. He determined that adding CO2 to the atmosphere traps energy, causing warming, which is a principle still used by climate scientists today.
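An Arrhenius-style energy balance can be written in a few lines: set absorbed sunlight equal to emitted blackbody radiation and solve for temperature. The solar constant and albedo below are standard textbook values, assumed here for illustration rather than taken from the class.

```python
def effective_temperature(solar_constant=1361.0, albedo=0.30):
    """Effective radiating temperature (K) from a simple energy balance.

    Absorbed sunlight, S * (1 - albedo) / 4, is set equal to blackbody
    emission, sigma * T**4 (the Stefan-Boltzmann law), and solved for T.
    Inputs are standard textbook values, assumed for illustration.
    """
    sigma = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
    absorbed = solar_constant * (1.0 - albedo) / 4.0
    return (absorbed / sigma) ** 0.25

# About 255 K, some 33 K colder than the observed ~288 K surface
# average; that gap is the warming supplied by greenhouse gases.
print(round(effective_temperature()))
```

The mismatch between this bare-rock estimate and the observed surface temperature is precisely the heat-trapping effect Fourier identified and Arrhenius quantified.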

Gilford and Holt also explained what makes a gas a greenhouse gas. The Earth’s atmosphere is made of mostly nitrogen and oxygen, but those gases absorb almost none of the energy given off by the Earth’s surface. Instead, small amounts of other gases, like water vapor and CO2, trap the most energy. Other gases like methane and nitrous oxide are present in even smaller amounts, but because they strongly absorb energy at different wavelengths than CO2 and water vapor, they can also contribute dramatically to warming.

For the full list of 2015 Global Change IAP classes, click here.

In The News
Boston Globe

MIT Prof. Paul O'Gorman talks with the Boston Globe about how climate change could affect snowfall.

By Carolyn Y. Johnson | Boston Globe

When a historic blizzard dumps a record-breaking amount of snow on the region, it’s only a matter of time before someone ventures a wry joke about climate change. Maybe there’s an upside to a warmer world, after all? Less shoveling.

But the halfhearted punchline doesn’t hold up to scientific scrutiny, according to recent research from a Massachusetts Institute of Technology atmospheric scientist. In fact, a warming world could mean less overall snow in a given year, but no reprieve from extreme snow events, at least in places like Boston.

To science, not all snowstorms are the same: average snowfall is likely to decrease in most places, but the most aggravating, traffic-snarling, work-stopping, back-straining extreme storms like the one that just buried Boston could actually get bigger.

“Most studies have been about how much snow falls in a season or in a year and call that average snowfall. But of course, in terms of disruption to society or economic disruption, we’re also interested in heavy snowfalls,” said Paul O’Gorman, an associate professor of atmospheric science at MIT who published his findings in Nature. “In some regions, fairly cold regions, you could have a decrease in the average snowfall in a year, but actually an intensification of the snowfall extremes.”

O’Gorman published his findings last August, back when snow was far from the front of mind. He is currently in Australia, where the weather is sunshine and showers instead of snow, but took the time to answer a few questions by email about his counterintuitive finding.

Q: Can you explain how a warming climate might affect snowfall?

A: There are two competing effects as the climate warms: the increasing temperature causes a changeover from snow to rain, but it also increases the amount of water vapor in the atmosphere. For a particular place and time of year, which effect wins out depends on the temperature to begin with.
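The moisture side of that competition can be sketched with the Clausius-Clapeyron relation: the saturation vapor pressure of air rises roughly 7 percent per degree Celsius of warming. The empirical fit below (Bolton's formula) is a standard meteorological approximation, offered as an illustration rather than anything from O'Gorman's paper.

```python
import math

def sat_vapor_pressure(t_celsius):
    """Saturation vapor pressure (Pa) via Bolton's empirical fit."""
    return 611.2 * math.exp(17.67 * t_celsius / (t_celsius + 243.5))

# One degree of warming near freezing raises the moisture an air mass
# can hold by roughly 7 percent, even as the same warming also pushes
# some would-be snow over the threshold into rain.
increase = sat_vapor_pressure(-4.0) / sat_vapor_pressure(-5.0) - 1.0
print(f"{increase:.1%}")
```

In sufficiently cold places the extra moisture dominates, which is how average snowfall can fall while the heaviest snowstorms intensify.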

Read more...