In the unlikely event that you have not yet experienced your daily dose of despair concerning the fate of humanity, I’d highly encourage you to read Elizabeth Weil’s ProPublica piece “They Know How to Prevent Megafires. Why Won’t Anybody Listen?” The article makes two basic points. 1) Extensive controlled burns would be an effective precautionary strategy that would prevent recurring megafires. 2) There are political and financial incentives which trap us into reactionary rather than precautionary fire strategies.
There are clearly lots of perverse incentives at play, but one part of the article was especially interesting:
“How did we get here? Culture, greed, liability laws and good intentions gone awry. There are just so many reasons not to pick up the drip torch and start a prescribed burn even though it’s the safe, smart thing to do. . . . Burn bosses in California can more easily be held liable than their peers in some other states if the wind comes up and their burn goes awry. At the same time, California burn bosses typically suffer no consequences for deciding not to light. No promotion will be missed, no red flags rise. ‘There’s always extra political risk to a fire going bad,’ Beasley said. ‘So whenever anything comes up, people say, OK, that’s it. We’re gonna put all the fires out.'”
It is risky to engage in controlled burns. Things can go wrong, and when they do go wrong it can be pretty bad: someone could lose their home, maybe even lose their life. Of course, it is far riskier, in one sense, to not engage in controlled burns. So why, then, are our incentives set up the way they are?
At least two different explanations are likely at play.
Explanation 1: Action vs Inaction. First, in general, we are more responsible for actions than for inactions. The priest who ‘passed by on the other side’ of a man left for dead did something terrible, but did not do something as terrible as the thieves who beat the man up in the first place. As a society we jail murderers; we don’t jail the charitably apathetic, even if the apathetic are failing to save lives they could save.
And indeed, this point does have an appropriate corollary when talking about fire suppression. I am not responsible for houses burning in California — this is true even though last spring I could have bought a plane ticket, flown to California, and started burning stuff. Had I done so, things would likely have gone terribly wrong, and in that case I really would have been responsible for whatever property I had destroyed. This seems appropriate: it could be catastrophic if my incentives were structured such that I was punished for not starting vigilante fires.
Elizabeth Anscombe gives us a similar example. If the on-duty pilot and I are both asleep in our cabins, then we are doing the very same thing when our ship hits an iceberg. Yet it was the pilot, and not I, who sank the ship. Indeed, had I, a random passenger, tried to navigate the ship, we would absolutely have held me responsible when something went wrong.
So, what is the principle here? Is it that amateurs are specially responsible for actions? No, because we can also identify cases where we indemnify amateurs for their actions. Perhaps the best example here is good Samaritan laws. These laws protect untrained people, like myself, if we make a mistake when trying to render emergency first aid.
What is really going on is that we don’t want passengers trying to navigate ships. Nor do we want aspiring philosophers attempting unsupervised controlled burns in California. But we do want pilots to navigate ships, and we do want burn bosses attempting controlled burns. As such, we should construct incentives which encourage that, and protect people from culpability even if things occasionally go wrong.
Explanation 2: Causal Links. Second, we trace responsibility through causality. Because you caused a house to burn down you are, at least partially, responsible for that damage. The problem is, it is almost always easier to trace causality to actions than to inactions. We can identify exactly which active burning caused damage. We can easily say, “the fire you started on February 14th destroyed these two houses.” It’s much harder to say, “the not-burning that you didn’t do on February 14th was what allowed the fire to get out of hand.”
And indeed, I think we probably can’t hold people responsible for any particular failure to burn. We can hold people responsible for how much controlled burning they do in general, but we can’t trace causal paths to hold them responsible for any particular bad result of inaction. Indeed, it would be unfair to do so: no burn boss can foresee when a particular failure to burn will destroy a house (in the way they can sometimes foresee when burning in a particular area might destroy a house). This creates a problem, though. Because we can’t hold people fully responsible for their inaction, we end up holding people disproportionately responsible for their actions, thus perversely incentivizing inaction.
This also parallels our interpersonal lives. For example, we generally want people willing to think for themselves. But we are also far more likely to condemn people for reaching terrible views they came up with themselves than for failing to recognize what is wrong with the conventional view. This can create perverse incentives, however. It might really be true that we are justly responsible for coming to terrible conclusions, but because it is so hard to hold people responsible for holding the majority view, it might be important to forgive even egregious mistakes in order to keep incentives favoring original thought.
So here is the general point. Assessing responsibility is far more complicated than just establishing whether someone played a causal role. Sometimes holding people responsible for things they really should not have done can perversely disincentivize people from taking risks we want them willing to take. The fires in California give one clear example of this, but the point generalizes to our lives as well.