Mis-applications of Gaia theory

I often encounter a sort of frustrating combination of Gaia theory plus bad philosophy of language, which goes something like this:

Earth is not sick. Earth has been around for billions of years, underwent many catastrophes before us, and will undergo many more after us. Everything will be fine. Human activity is barely a blip on the radar of the Earth’s lifespan, and we can have no serious, long-term detrimental impact. It is merely another form of human hubris to think that we can have so serious an impact on the Earth.

I admit to not being totally clear as to what people who say things like this are saying. Given the global scale at which Gaia theories consider things (life, physical processes), do they consider Earth to be special, or do they suppose that Earth itself is just another tiny, basically irrelevant microcosm within what is surely a Universe teeming with life? If the latter, then presumably we shouldn’t care if the Earth is destroyed altogether at any point, because life will go on elsewhere. Man’s time on Earth is to the Earth as a whole as Gaia/Earth is to Gaia/Universe.

I don’t suppose the people who say that sort of thing would accept this. If that is because they think Earth really is special, and the likelihood of life elsewhere very low, then that’s consistent enough. If not, I’m not sure what reason they could give for thinking the destruction of Earth matters.

I’m also not clear as to what they think would constitute a genuine threat to Earth. Presuming that the destruction of Earth is a bad thing (maybe that’s an invalid assumption), how severe would the threat posed by humans have to be in order for Gaia theorists to really take it seriously?

If it wiped out life for 1 million years, is that okay? What about 10 million years, or 100 million years? It seems like my Gaia theorist (who I hope is just a strawman, but I fear is not) is committed to there being some amount of destruction that would be intolerable, and just thinks it unlikely that we will attain that level of impact. Let’s say the sun becomes a red giant in 7.5 billion years, and the Earth is wiped out in 8 billion years. And let’s say that in the year 2500 humans succeed in destroying life for the next 7 billion years (improbable but not impossible), so that Earth only has about 500 million more years in which it supports life. Is that acceptable, or is that “too little life”? And how would we decide? What if life is not totally wiped out, but all we have for 6 billion years is protozoa and cockroaches, acidic oceans, sulfur skies, and the like? Is that bad? Is anything bad, or is that just a category that humans impose onto the world, such that it doesn’t matter whether there is life or not, what kind of life there is, or how much of it there is?

If we are committed to there being a specific amount of destruction that is “too much,” as it seems my Gaia theorist is, how might we decide what that level is?

My problem is not with the idea that the Earth is a resilient system, capable of surviving great stresses and of regenerating conditions that support flourishing life.

My problem is the incoherence of holding both that

  1. it is possible for something to happen to the Earth that we could legitimately call “bad”, or at least an undesirable outcome, i.e. it is conceivable that under some circumstances too much life might be destroyed; and
  2. the Earth is currently just fine, and we shouldn’t worry about it, because we humans are totally incapable of causing any serious amount of damage.

I would be happy if any Gaia theorist were able to dispel my confusion.

I say it is a combination of Gaia theory and bad philosophy of language because, however it is put, it seems to trade on ambiguous use of words like “fine,” “sick,” and “actual damage,” which the speaker wants to be able to apply both in their normal contexts (as in, we can say someone is sick even if they aren’t going to die, or injured even if they only have a cut and will be fine), and in a vague Gaia-theory sense that stretches them to very different contexts (vast scales of space and time) without providing any sense of how we could know whether we were using them well or badly. My point is that people who talk the way my Gaia theorist does don’t actually know what they’re saying.

Neat, plausible, and wrong

One of my favourite quotations is from H.L. Mencken:

There is always a well-known solution to every human problem – neat, plausible, and wrong.

Tom Flanagan amply demonstrates the sort of reasoning at which this barb was aimed in a recent op-ed for The Globe & Mail, “We don’t need a centre party to prevent polarization.”

I will let his words speak for themselves:

What keeps democratic politics focused on the centre? Not the existence of a centre party but the workings of the “median voter theorem” (MVT). Think of voters as points spread out along a line – on the left, on the right, in the middle. By mathematical necessity, there is a median position, with half of voters to the left and half to the right. The median voter sits at the winning position in the democratic competition of political parties.

The proof is simple and elegant. If Party A moves to the left or right of the median, it allows Party B to locate itself closer to the majority of voters. The MVT predicts that Party A and Party B will tend to converge on the median because they cannot afford to let their rivals cut them off from more than half the voters.
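Taken purely as the piece of game theory it is, the quoted argument is easy enough to illustrate. What follows is a minimal sketch, not anything from Flanagan’s piece: the voter distribution, the rival platforms, and every number in it are invented for illustration. It simply checks that, with voters as points on a line who back the nearer of two platforms, the platform sitting at the median voter wins a majority against any other, and that a party which strays from the median can be undercut by a rival positioned between it and the median.

```python
import random

# Toy illustration of the quoted argument, nothing more: voters are points on
# a line, each backs the nearer of two platforms, and the platform located at
# the median voter wins a majority against any other. The voter distribution
# below is entirely made up for illustration.

random.seed(0)
voters = sorted(random.gauss(0.2, 0.5) for _ in range(1001))
median = voters[len(voters) // 2]

def vote_share(pos, rival):
    """Fraction of voters strictly closer to pos than to rival (ties split)."""
    closer = sum(abs(v - pos) < abs(v - rival) for v in voters)
    ties = sum(abs(v - pos) == abs(v - rival) for v in voters)
    return (closer + ties / 2) / len(voters)

# The median platform beats any rival platform, near or far, left or right.
rivals = [median + d for d in (-1.0, -0.3, -0.05, 0.05, 0.3, 1.0)]
assert all(vote_share(median, r) > 0.5 for r in rivals)

# A party that strays from the median is undercut by a rival that slots in
# between the stray platform and the median voter.
a = median + 0.3   # Party A drifts to the right of the median voter
b = median + 0.1   # Party B positions itself closer to the median
print(f"Party A's share: {vote_share(a, b):.2f}")  # less than half
print(f"Party B's share: {vote_share(b, a):.2f}")  # more than half
```

Note that nothing in the sketch moves any voter or party anywhere; it only verifies a property of an already highly stylized model.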

The first problem is that he invokes the MVT as having some causal role here, as though it were a force moving people around rather than a description of the phenomena. The MVT does not “keep democratic politics focused on the centre.” Any account that purported to explain such a thing would have to be vastly more complicated than this simple theorem. Its unsuitability to the task appears clearly when Flanagan notes that a move in one direction by a party “allows” the other party to locate itself closer to the majority of voters. The problem is that this “allows” is not “causes”: it is a stand-in for some entirely vague understanding of political strategy, and it must admit all sorts of other determining factors that we have no idea how to spell out in enough detail for the MVT to have any real explanatory force here.

The other problem is that it is absurdly simplistic to lay political views out on a line. I thought first-year undergrads learnt that any remotely sophisticated organization of the political spectrum does not draw its inspiration from a straight line. It’s surprising, and somewhat disappointing, that a political science professor such as Flanagan would give any credence to this approach.

This is not just the usual problem of relating abstract models to the real world. That is of course always a problem, as a model must abstract some things away in order to be a model and not just a copy. Flanagan correctly admits that “The MVT is a mathematical abstraction belonging to game theory, and the world is far more complicated than that,” but this makes it seem as though the problem is just the traditional one of translating abstraction into real-world, concrete application. In fact, the issue is that the model itself is bad. And when you start with a bad abstraction, you will never get a good translation back into concrete terms.