In a recent Clubhouse session on carbon offsets, an audience member asked the panel about the "single most important thing" that could help increase confidence in the whole idea of carbon offsets. My "single most important thing" was to recognize that testing carbon offsets for environmental integrity involves the same kind of hypothesis testing challenges that we encounter in all kinds of situations, from pregnancy testing to guilt and innocence determinations in the judicial system. Because this point is so rarely recognized or discussed, I'll explore it further below.
You may have previously seen my pregnancy testing analogy reflected in this slide:
But in many respects, comparing carbon offsets to guilt or innocence determinations in the judicial system is even better. With pregnancy tests, you'll eventually know whether the test was right or wrong. With trial outcomes, in many cases you can simply never know for sure -- just as with carbon offsets.
So here's where the hypothesis testing comes in. I'll use the judicial system because it will be intuitively obvious to everyone.
Any trial involves three potential outcomes: the "right" verdict, an incorrect finding of innocence, or an incorrect finding of guilt. Convicting an innocent person is an example of a Type 1 error, or a false positive. Clearing a guilty person is an example of a Type 2 error, or a false negative.
Offset testing similarly has three potential outcomes: the "right" conclusion, an incorrect finding of environmental integrity (a Type 1 error, or false positive), and an incorrect finding of no environmental integrity (a Type 2 error, or false negative).
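To keep the parallel straight, here is the mapping side by side:

| Outcome | Trial | Offset test |
| --- | --- | --- |
| Correct | Right verdict | Right conclusion |
| Type 1 (false positive) | Innocent person convicted | Offset lacking integrity approved |
| Type 2 (false negative) | Guilty person acquitted | Offset with integrity rejected |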
The real question is how the judicial system balances false positives and false negatives. In the slide below, you can see that it would certainly be possible to have roughly equal numbers of false positives and false negatives, depending on how you set standards of evidence, etc.
In reality, of course, the "beyond a reasonable doubt" required for conviction is intended to ensure that there are far more false negatives (guilty people walking free) than false positives (innocent people going to prison) arising out of trials. That's because as a policy matter we're more worried about convicting the innocent than letting the guilty go free.
It's worth noting two things here:
- If we simply assumed that we could reliably tell who was telling the truth and who was lying in a trial, and did not internalize the idea of "beyond a reasonable doubt," FAR more innocent people would be convicted. It's the recognition that we're not very good at detecting who's telling the truth, and that we're engaged in guilt-and-innocence hypothesis testing, that led to the "beyond a reasonable doubt" criterion being put into place.
- It's impossible to minimize false positives and false negatives simultaneously. Anything you do to reduce false convictions will increase the number of guilty people going free, and vice versa. Remember this! The short simulation below makes the tradeoff concrete.
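Here is a minimal Python sketch of any yes/no test, offsets included. The score distributions, threshold values, and 50/50 split are invented purely for illustration; the point is only the direction of the tradeoff: moving the evidence bar reduces one error type and increases the other.

```python
import random

random.seed(42)

def evidence_score(is_genuine):
    # Genuine cases tend to score higher, but the two distributions
    # overlap -- that overlap is why no threshold can be error-free.
    base = 0.65 if is_genuine else 0.35
    return base + random.gauss(0, 0.15)

# 100,000 hypothetical cases, half genuine ("additional") and half not.
cases = [(g, evidence_score(g))
         for g in (random.random() < 0.5 for _ in range(100_000))]

# Approve (convict / credit the offset) whenever the score clears the bar.
for threshold in (0.3, 0.5, 0.7):
    false_pos = sum(1 for g, s in cases if not g and s >= threshold)
    false_neg = sum(1 for g, s in cases if g and s < threshold)
    print(f"threshold={threshold:.1f}  "
          f"false positives={false_pos}  false negatives={false_neg}")
```

Raising the threshold (demanding more evidence before crediting) drives false positives down and false negatives up; lowering it does the reverse. No threshold eliminates both.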
What about when it comes to carbon offsets? It is truly enticing to believe that we can reliably distinguish between "good" and "bad" offsets, and move on from there. I hope it is now clear why that is as much of a pipe dream as reliably distinguishing between who is telling the truth and who is lying in a trial.
No matter what tests we develop, there will ALWAYS be false positives (reductions and sequestration inappropriately allowed into the offset pool), false negatives (reductions and sequestration inappropriately denied entry into the offset pool), and "real offsets." And unfortunately, false positives will always be inversely related to false negatives, as shown below. It's worth noting that because a high rate of false negatives shrinks offset supply and raises prices, there is always pressure to limit false negatives. What does that inevitably mean? Yup!
If we don't approach offset testing through a hypothesis testing frame, and if we don't explicitly think about how to prioritize between false positives and false negatives, we are likely to end up with FAR more false positives than if we had treated offsets as the hypothesis testing challenge they are.
This reality is reinforced by the graphic below, a back-of-the-envelope calculation of two billion tons of potential emissions reductions and carbon capture in the U.S. that would constitute false positives if approved as offsets, because they have nothing to do with the existence of a carbon market.
Remember the definition of carbon offset additionality (and note that the same definition applies to "carbon capture tons"):
"Additional" emissions reductions for carbon offset purposes are reductions that can be traced back to the existence of, operation of, and/or financial incentives created by a carbon market, whether voluntary or regulated.
The bottom line is that there are BILLIONS of tons of pre-existing "emissions reductions" and HUNDREDS OF BILLIONS of tons of pre-existing "carbon sequestration" that would constitute false positives in a carbon offset system (and thus not advance climate change mitigation objectives).
Why hundreds of billions of tons? Because hundreds of billions of tons cycle between the atmosphere and the biosphere every year, going into trees, soils, the oceans, etc. Without a robust effort to distinguish between these "already happening" sequestration tons and "additional" tons, any market will be swamped with "already happening" tons and lead to no climate change benefit.
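For a sense of scale, here is a rough back-of-the-envelope using approximate figures from published global carbon budget estimates. These flux values are not from this article, and the exact numbers vary by source; they are only meant to show the order of magnitude:

```python
# Approximate gross annual carbon fluxes (illustrative round numbers):
#   terrestrial gross primary production: ~120 GtC per year
#   gross ocean-atmosphere CO2 exchange:  ~90 GtC per year
C_TO_CO2 = 44 / 12  # ~3.67 tons of CO2 per ton of carbon

land_gtc = 120
ocean_gtc = 90
gross_gtco2 = (land_gtc + ocean_gtc) * C_TO_CO2
print(f"~{gross_gtco2:.0f} billion tons of CO2 cycling each year")  # ~770
```

That is hundreds of billions of tons of CO2 moving in and out of the biosphere every year, dwarfing the size of any plausible offset market.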
In this article I'm not referring to any particular offset or category of offsets, and I'm not weighing the evidence on whether offset markets are or aren't actually dominated by "false positives." I'm simply pointing out that if we never even recognize the nature of the hypothesis testing challenge when it comes to carbon offsets, we're likely to end up with a system with FAR less environmental integrity than if we had asked the right questions to begin with.
So what's the "single thing" that would make the most difference to offset credibility? Recognizing that we're engaged in hypothesis testing, which requires serious attention to the potential for false positives and false negatives, and to how we balance them. It requires asking the right questions, not just assuming we're doing the right thing.
This piece originally appeared on LinkedIn. Feature image by Gerd Altmann from Pixabay.