Kim Stanley Robinson’s Aurora and Intergenerational Responsibility

"Wormhole travel" by Les Bossinas is licensed under CC0 Public Domain (via Wikipedia)

Editor’s note: this article contains spoilers for Aurora. 

Aurora, the most recent novel by science-fiction author Kim Stanley Robinson, follows the long-distance voyage of Earth's first interstellar generation starship. A generation starship is a spaceship designed to sustain a small human population for the several generations its journey requires. This ship is travelling 11.9 light years to the Tau Ceti system, a voyage that takes roughly 200 years to complete. Thus, the inhabitants of the ship that we meet are not the individuals who signed up willingly for the expedition, but rather the descendants of their descendants.

The ship’s population faces several major complications on its voyage, including the ongoing deterioration of centuries-old ship parts, ecological imbalances in its self-contained ecosystems, and "zoo devolution," the genetic regression in traits such as intelligence that occurs with each successive generation. Though the would-be colonists do successfully reach their destination, they find their chosen planet uninhabitable: a small virus-like entity, widespread across the planet, is lethal to human beings. The starship inhabitants are forced to decide: do they attempt to return to Earth, or do they try to salvage the situation by dealing with the threat or finding another suitable planet to colonize?

The novel deftly explores political and ethical conundrums involving ecological sustainability, the challenges of democracy, and the human yearning for personal freedom. The central premise itself raises a hard ethical question: did the original inhabitants of the ship have the right to decide that their future generations would be born, live out their lives, and die in the harrowing conditions of such a long and difficult voyage? One of the main characters, Devi, laments, “We’ve been rats in a cage, two thousand at a time for seven generations, and for what? For what?” Many other inhabitants of the ship also question the wisdom of building the generation starship in the first place. The issue is timely for us. Not only will our species likely face the same question if we ever attempt interstellar colonization, but the current threat of climate change poses similar challenges. Is the current generation morally responsible for making sacrifices in economic growth and living standards so that future generations have a livable climate on this planet?

Philosophical interest in questions of intergenerational responsibility has focused on solving the so-called Non-Identity Problem. We often explain the wrongness of an action by pointing to a person who is harmed, or who will be harmed, by the action. If our actions harm no one, including ourselves, then it does not seem that those actions could be wrong. Some decisions we make now may harm people who do not currently exist but will exist in the future. Our decision to continue burning fossil fuels in large quantities will harm future people by making their climates hotter: it may decrease their crop yields or increase the frequency of violent weather, to give some examples. It is easy to call this a moral wrong if we can be sure that some future people will be harmed. However, many of the actions we take now, especially big ones like public policy decisions, help determine who will and will not be born. For example, if we decide to reduce fossil fuel consumption by limiting population growth, then people will make different reproductive choices, and a different set of people will be born than if we had enacted no such policy at all. How can we say that an action we take now harms a future set of people if that very action is the reason for those people's existence in the first place?

The original inhabitants of the generation starship were in a similar position. The current generation on the starship, which now faces many harmful side-effects of a centuries-long space voyage, would not have existed if it were not for the original inhabitants’ decision to launch the starship. On the one hand, to knowingly place future people in a harrowing and probably futile position seems to be a moral wrong. On the other hand, it seems odd to say that people whose very existence is contingent on a decision are also harmed by that decision.

Numerous responses to the non-identity problem have been proposed. One response is to “bite the bullet” and say that, despite our intuition that the original decision to launch the generation starship was morally wrong, it was not: we cannot be said to harm someone by a choice if that person's existence is contingent on the choice itself. Another response is to deny the other intuition, that moral wrongs are always wrong in virtue of some identifiable present or future person being harmed. One version of utilitarianism holds that an action is morally right insofar as it maximizes the total aggregate happiness in the world and minimizes the total aggregate unhappiness. It does not matter how happiness is maximized or which individuals are made happy (or not) by the action; the theory is impersonal. On this view, the decision to launch the generation starship is not morally wrong because it harms indeterminate future persons. It is morally wrong (or right) only depending on whether it was the best available option for maximizing happiness and minimizing unhappiness overall. Both responses face the difficulty of explaining away our intuitions about how morality works.

There is much more to the non-identity problem than presented above in this short post. For useful sources, I recommend consulting the Stanford Encyclopedia of Philosophy entries on the non-identity problem and on intergenerational justice. If you are interested in reading a gripping science-fiction narrative that tackles the political and ethical problems raised by technological advances, I highly recommend Aurora by Kim Stanley Robinson.
