Q: We recently worked on a paper, which you led, about testing geoengineering--these sunlight reflection approaches. So what was the main conclusion of this paper?
A: Let me actually say two things. One is that the easiest way to test would actually be to put in some modulated climate signal with sunlight reduction methods--changing from year to year how much forcing you’re putting in, and then looking at the correlated response. And if you do that, there are two main issues. One is that the climate doesn’t respond the same way to that type of forcing as it does to a sustained forcing. The second is just one of signal to noise: the tradeoff between how large a signal you have to put in, how long you’re willing to wait, and what accuracy you’re interested in.
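[The modulated-signal idea can be sketched in a few lines of Python. This is a toy illustration only, not the paper’s method: the forcing amplitude, sensitivity, and noise level below are assumed round numbers, and a real detection would also have to contend with red noise, ocean thermal inertia, and the underlying warming trend.]

```python
import numpy as np

rng = np.random.default_rng(0)

years = 30
# On/off square wave of sunlight-reduction forcing, W/m^2 (illustrative amplitude)
forcing = np.tile([1.0, 0.0], years // 2)
sensitivity = 0.5   # assumed transient response, K per (W/m^2)
noise_sd = 0.15     # assumed interannual global-mean variability, K

# Global-mean temperature = response to the known forcing + natural variability
temperature = sensitivity * forcing + noise_sd * rng.standard_normal(years)

# Recover the response by regressing temperature on the known forcing signal
slope, intercept = np.polyfit(forcing, temperature, 1)
print(f"recovered sensitivity: {slope:.2f} K per (W/m^2)")
```

[Because the experimenter controls the forcing schedule, the response can be pulled out of the noise by correlation, which is what makes the modulated signal the easiest kind of test.]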
Q: Let me get this right. So one way you can see the response is by applying a square wave or some other pulsed forcing at some frequency, and then seeing how the climate system responds at that frequency. That gets at the signal-to-noise issue. But then you’re telling me, “Oh, but if you do that, the climate response you see is not going to be the same as the response to a sustained deployment.” Let’s say you do this pulsed thing. How do you go about predicting what would actually occur in a sustained deployment?
A: I think the answer there is, there are certainly things that you will learn from doing this pulsed thing, but you’re still going to have to extrapolate using models to what you think is likely to happen to a sustained deployment. And the same issue is true if you’re, for example, using natural experiments. So Mt. Pinatubo erupts and it injects a certain amount of aerosol, and we estimate what the global mean temperature effect is, and we look at the changes in rainfall that result. And we would still need to extrapolate from those estimates to what we’d expect to see for a sustained deployment as well.
Q: It sounds like, if you had a few decades and you were willing to apply something like one-quarter of the radiative forcing of a CO2 doubling, you could find out, to some reasonable degree of accuracy, what the global mean response is on that couple-of-year time scale.
A: That’s absolutely correct.
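[As a rough illustration of the size-versus-duration tradeoff, here is a back-of-envelope sketch. Every number in it is an assumption made for illustration--a fraction of the canonical ~3.7 W/m^2 forcing from doubled CO2, a guessed transient sensitivity, and white-noise interannual variability. A serious estimate, like the one in the paper, must account for correlated noise and ocean inertia, which lengthen the wait considerably.]

```python
import math

noise_sd = 0.15      # K, assumed interannual global-mean variability
sensitivity = 0.2    # K per (W/m^2), assumed transient response to pulsed forcing
target_snr = 3.0     # detection threshold on the averaged response

def years_needed(forcing_wm2):
    """Years of annual data for the averaged response to reach target_snr,
    treating each year as an independent white-noise sample."""
    signal = sensitivity * forcing_wm2
    return (target_snr * noise_sd / signal) ** 2

# Quarter, eighth, and sixteenth of the ~3.7 W/m^2 forcing from doubled CO2
for f in (3.7 / 4, 3.7 / 8, 3.7 / 16):
    print(f"{f:.2f} W/m^2 -> about {years_needed(f):.0f} years")
```

[The point of the sketch is the scaling: halving the test signal quadruples the required duration, which is exactly the tradeoff between signal size, waiting time, and accuracy described above.]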
Q: But then you still have this issue of how do you extrapolate that to the effects of a sustained deployment. This sounds like, yes, you can test some things, but the idea that you’re really going to know in a test what’s going to happen in a real deployment, there’s always going to be limited--
A: That’s always true--in any field of engineering or human endeavor, a test is never exactly the same as the reality you’re interested in exploring. So it’s really just a question of: is the test useful? Can you learn enough to make doing the test worthwhile?
Q: It sounds a little bit like models, right? All tests are imperfect, but some tests are useful. So the idea is that you can do a test, it’s useful, but it’s not going to tell you everything you want to know.
A: So let’s consider the situation where we’re several decades from now, we believe that the consequences of not doing geoengineering are substantial, and we have models that predict what might happen with geoengineering. Would it then be useful to do some testing to help improve our knowledge? We’re not going to learn everything from a test, but we can learn things from it and validate the models, so that we’re not purely relying on models.
Q: Another approach would be some sort of hybrid approach where you’re testing processes, right? So there might be some tests that have to do with particle aggregation and dispersion, and maybe others having to do with the effects on stratospheric chemistry, but it wouldn’t really be a full test of the full climate response. So you just try to build up confidence in model parameterizations and then hope that the model is right for the bigger climate-scale stuff.
A: Right. And one could also test one’s models by making sure that they replicate the next volcano that goes off, for example. If we’re able to predict the response that we see, that gives us an awful lot of confidence that the models would be right.
Q: Obviously, these things differ in risk, but it seems that, for example, if we cut CO2 emissions dramatically today, or we cut the Chinese sulfur aerosols coming out of the power plants, you could say, “Well, we don’t really know for certain what the climate response to that is, and the only way that we’re really going to know is by doing it.” And yet we do feel like we have some predictive capability in those cases. Is it a parallel situation?
A: It’s a parallel situation. And certainly cutting those things would be a good thing. I think you could also say we don’t really know exactly what the consequences are of not cutting CO2 emissions. I guess there are two things that are different with aerosol forcing. One is that we have this opportunity--if we were serious about doing this--to actually shape the signal to maximize information. And the other is that morally it’s a little bit different when you’re asking, say, what are the consequences of something I’m intentionally doing to the climate? We want to be really sure that we’re not going to do more harm than good by using geoengineering.
Q: The Montreal Protocol, which led to the reduction of CFCs in order to protect stratospheric ozone, was in a way model-derived--at least a lot of the basis for that treaty was model predictions that continued CFC emissions were going to destroy the ozone layer. And the projected benefits of reducing CFC emissions also largely came from models. So there is a history--I don’t know if this is a question--but there is a history of doing things based on models. You could say that a lot of the concern about global warming is based on model predictions, and we want to avoid the things that the models predict.
A: I’m not opposed to model predictions. My honest opinion is that, now that we really understand just how long it would take to test geoengineering, the only time we’re ever going to really use it is if we get ourselves into a situation where not doing geoengineering is so bad that we’re willing to accept the risks of doing it. We’ll simply hope and trust that our models are correct, and we’ll probably wind up using it without ever testing it.
Q: Yeah. My sense is--I don’t have a crystal ball to see the future--but it seems like the result of your study is that, yes, if you did something fairly substantial over a number of decades, you could learn some valuable information. But politically and logistically, a subscale deployment that’s pulsing aerosols for a few decades as a global experiment seems to me a bit unlikely.
A: I can’t imagine that ever happening.
Q: Maybe the statement that you can’t test the global-scale and regional-scale climate effects of solar radiation management is incorrect in an absolutist sense. But in terms of what might be practically feasible to do, it’s largely right. Right?
A: I think this is why, if you actually read the paper--the title of the paper is “Can We Test Geoengineering?”--there’s nowhere in it an explicit statement that says yes or no to that question. Strictly speaking, of course you can test it; but practically speaking, I don’t think that’s ever going to happen.
Q: It reminds me of Marchetti’s paper that first introduced the word geoengineering. If I recall correctly, I think that word appears in the title of his paper, but never in the body of the paper. So maybe there’s a tradition that we’re carrying on here.
A: I’m happy to be in the grand tradition. [laughs]