The Earth We Are Creating

Abstract: Over the past decade, a number of Earth System scientists have argued that we need a new geological epoch, the Anthropocene, to describe the changes to Earth that have occurred since the 1800s. The preceding epoch, the Holocene (the period from the end of Earth's last glaciation about 12 millennia ago), offered an unusually stable physical environment for human civilisations. In the new Anthropocene epoch, however, we can no longer count on the climate stability we have long taken for granted. Paradoxically, it is our own actions that are undermining this stability: for the first time in history, human civilisation is now capable of decisively influencing the energy and material flows of our planet. Particularly since the 1950s, under the twin drivers of growth in population and per capita income, we have seen unprecedented growth in oil use and energy use overall, vehicle numbers, air travel and so on. This growth has set us heading toward physical thresholds or tipping points in a number of areas, points that once crossed could irreversibly lead to structural change in vital Earth systems such as climate or ecosystems. We may have already passed three limits: climate change; rate of biodiversity loss; and alterations to the global nitrogen and phosphorus cycles. The solutions usually proposed for our predicament are yet more technical fixes, often relying on greater use of the Earth's ecosystems, biomass for bioenergy being one example, and one we explore in this paper. We argue that these fixes are unlikely to work, and will merely replace one set of problems with another. We conclude that an important approach for achieving a more sustainable and equitable world is to reorient our future toward satisfying the basic human needs of all humanity, while minimising both our use of non-renewable resources and pollution of the Earth's soil, air and water.

Every generation is tempted to believe that the era it is living through is a crucial point in world history. Given this, we need to be very careful in assigning such importance to our own era. After all, the first half of the 20th century witnessed two devastating world wars, the Great Depression, the birth of nuclear weapons (and their first use), computers, antibiotics, quantum physics and relativity, and much more. Furthermore, earlier civilisations, such as those in ancient Mesopotamia, and more recently the Mayan civilisation in Central America, experienced ecological collapse. However, these were local collapses. The way in which our era is unique-perhaps the only way it is unique-is that we risk global, not merely local, environmental collapse; history might not start with us, but we may well be the beginning of its end.
Even though the Anthropocene is regarded as two centuries old, it is only since the mid-20th century that the tempo of change has dramatically speeded up. In the history of ideas, the first half of the 20th century may eventually be judged as more remarkable; however, the physical world does not respond to ideas, or to human constructs such as carbon intensity (emissions of CO2 per unit of Gross Domestic Product). Instead, it responds, for example, to the number of CO2 molecules in the atmosphere, or CFC molecules in the polar stratosphere. It is only since the mid-20th century that the rapid rise in, for example, atmospheric CO2 concentrations and energy consumption began (see Figure 1) [3][4][5][6][7][8][9].
Unprecedented population growth is a major factor underpinning much of the rapid change of the past half-century. Humans of some sort have inhabited the Earth for about two million years, and modern humans for perhaps as much as 200,000 years, but for most of this period their numbers were small. Around ten millennia ago, at the beginning of agriculture, we numbered perhaps five million. At the start of the current era, the total population was about 200 million, and it stayed roughly constant for a millennium. Even in 1800, global population was under one billion. In 2011, it passed seven billion, and the UN medium population projection is that by 2050 it will reach 9.55 billion [4]. The main reason for this rapid growth is the progress made in reducing infant mortality; our population size is therefore a measure of our success in reducing the controlling influence the environment has over us. For comparison, all great apes (chimpanzees, gorillas and orangutans) are thought to total fewer than 400,000 today in Africa and Asia, and their numbers are rapidly diminishing. Ten millennia ago, they may well have easily out-numbered us; a few decades from now, they may no longer exist in the wild. Vaclav Smil [10] has even estimated that ten millennia ago, humans and their domesticated animals amounted to 0.1 % of all mammal biomass. Today the figure is 90 %, and as we will see, humans additionally now appropriate large proportions of terrestrial biomass and fresh water.
The fossil record suggests that the fate of all species-presumably including our own, Homo sapiens-is to become extinct. One estimate is that 99.9 % of all species that have ever lived on Earth have eventually died out, usually within 10 million years of their first appearance on our planet. Given the vast span of life on Earth, the rate is usually slow; for mammals and marine life it is estimated at about 0.1-1.0 extinctions per million species per year [1]. Yet today, largely through human actions, the rate is 100-1000 times higher than this background rate. For many species, their numbers in the wild are already so small that functional extinction has occurred, in that they no longer play a significant role in shaping the ecosystems they live in.
As an example, consider the oceans' whales. After centuries of whale hunting, their numbers today are only a small fraction of their historical population. Since these huge marine mammals have correspondingly huge appetites, it might be thought that their demise would have helped ocean ecosystems. According to marine biologist Steve Nicol [11], such a position ignores the broader impact of their loss. These air-breathing mammals often dive deep for food, then return to the surface to breathe, and in doing so they deposit their waste through the layers of the oceans. The disturbance generated by the motion of whales also helps to redistribute vital nutrients, such as iron, to the surface waters. Photosynthesis can only occur in the oceans' sunlit surface layers, and the amount of life the oceans can support ultimately depends on how much biomass plants make through photosynthesis. Upon death, whale corpses form the ultimate link in the chain of life. The early whaling ships' records indicate that millions of whales fed in the waters off Antarctica in the summer months. Their cumulative influence would have been large, and their near-demise has decreased fish and krill abundance in these waters.

Life on Earth has seen five major extinction events (defined by paleontologists as events in which Earth loses more than 75 % of its species in a geologically short interval), as well as numerous minor ones [12]. Many scientists now speak of the present-day accelerated rate of species loss as the Sixth Extinction, setting it alongside the five others of the past 540 million years or so. True, the Earth is thought to have as many as 40 million species, although only about 1.6 million have been described. Do the various Earth ecosystems need this profusion of lifeforms? The risk with the present rate of species loss is that ecosystem resilience will be lost, making it difficult for ecosystems to recover after disturbances such as drought, pest outbreaks or fire.
We now know that atmospheric CO2 concentrations are rising at historically unprecedented rates. Today, we directly measure atmospheric CO2 concentrations in parts per million by volume (ppm) at various remote sites, particularly at Mauna Loa in Hawaii, but we can also get accurate values going back almost a million years from the air trapped in ice core samples recovered from the Antarctic and Greenland ice caps. The deeper the level from which the ice core is extracted, the older the air trapped in the sample. From its analysis, we can obtain the composition of the atmosphere through the millennia. Until the beginning of the Industrial Revolution, the CO2 concentration varied only from about 180 to 280 ppm, its value around the year 1800. Human activity since 1800 has lifted this value to around 400 ppm in 2014, and its annual rise continues at around 2 ppm. With this anthropogenic rise, we are now experiencing CO2 levels the Earth has not seen for perhaps 30 million years, well before humans first existed. We have ventured deep into uncharted territory.
The growth in human population, particularly since the end of World War Two, has been accompanied by even larger increases in real global Gross Domestic Product, and similar rises in steel, fertiliser, energy and water consumption. Urban populations have risen dramatically, particularly in industrialising nations, with city-dwellers now constituting over half of the global population. Since 1950, motor vehicle numbers have risen from a few tens of millions to over a billion today. Telephones now number over six billion (the result of the remarkable recent growth in mobile phones), and air travel has grown from almost zero to several trillion passenger-km annually. In Figure 1 we show several examples of how many trends accelerated around the middle of the 20th century.
Are there any signs that this great acceleration is nearing its end? Richard Heinberg [13] speaks of "Peak Everything" in his book of the same title. The idea of peak consumption was first applied to oil, but has recently been extended to "peak water", "peak phosphorus" and so on. The claim of a near-term peak in total annual oil production (and so consumption) is strongly contested, but it is widely agreed that the peak in conventional oil production either has already occurred or soon will. Increasingly, we will have to rely on unconventional sources such as deep ocean oil and Canadian oil sands, with far higher monetary and environmental costs.
But what has passed largely unnoticed is that, when expressed on a per capita basis, peak oil occurred over three decades ago. In 1978, global oil production averaged 5.6 barrels per capita; it dropped at the time of the second oil crisis in 1978-1979 and levelled off at around 4.7 barrels per capita. Even the optimistic forecasts of the US Energy Information Administration [5] do not foresee it doing more than maintaining this level. Peak oil researcher Colin Campbell [14] even foresees per capita oil consumption falling to under 1.5 barrels by 2050. The consequences are profound, since present oil consumption in the world is very unequal. If some countries expand their per capita levels of use, as China and India are now doing, then other countries will have to make do with less. We no longer have the twin safety valves of recent decades: the shift from oil to other fuels, especially natural gas, for electricity generation, and the collapse of the Soviet Union, which rapidly led to a 50 % decline in oil use in the successor states [3].
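The per-capita arithmetic behind this point is easy to check: roughly constant absolute output shared among a growing population necessarily means a falling per-capita figure. A minimal sketch, using illustrative round numbers rather than the paper's data:

```python
def barrels_per_capita(annual_barrels, population):
    """Average annual oil production per person."""
    return annual_barrels / population

# Hold output flat at an assumed 30 billion barrels/year and let
# population grow: the per-capita figure declines on its own.
flat_output = 30.0e9  # barrels per year (illustrative assumption)
for pop in (5.0e9, 6.0e9, 7.0e9):
    print(round(barrels_per_capita(flat_output, pop), 1))
```

This is why an absolute production plateau, which sounds benign, still translates into a per-capita peak decades earlier.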
Another important point has passed almost unnoticed. The big argument about fossil fuels (some researchers see not only oil, but natural gas and even coal, showing production peaks and then declines within a couple of decades) is whether production will peak in the very near future or within the lifetimes of the next generation. There have been thousands of human generations; the argument is essentially over whether ours or the next generation will use up nearly all the easily-accessed reserves. We must therefore consider not only international equity in resource allocation, but also equity between generations. In brief, living sustainably means that we should not compromise the ability of future generations to meet their human needs.
Equally important is the occurrence, on a per capita basis, of peak grain in 1984 (grain supplies most of the world's food). For the world's ocean fisheries, the per capita peak occurred in 1988, and in 1997 the catch peaked in absolute terms [6].

The Earth We Are Creating
Natura non facit saltus-nature doesn't make a jump. (Carolus Linnaeus)

In Victorian times, this dictum of Linnaeus about gradual change was implicitly endorsed by most natural scientists. Scientists were gradually becoming aware of the immense span of Earth's geological and biological history, which offered plenty of time for even slow rates of change to effect major transformations. Estimates of tens of millions of years for the Earth's age at the end of the 18th century had risen to billions of years by the end of the 19th century. There were catastrophist theories in geology, but these were seen, often unfairly, as being tainted by literal interpretation of the Bible, as in the Noachian flood story [15].
Today, both physical and biological scientists are much more aware of the importance of rapid change. In evolution, the "punctuated equilibrium" theory of Gould and Eldredge [16] saw evolution proceeding by short bursts of evolutionary change interrupting long periods of morphological stability. As discussed above, life on Earth has already been subject to five catastrophic extinction events. In all areas of science, researchers are uncovering more and more empirical examples of changes in both Earth physical systems and ecosystems that are non-linear, complex, path-dependent, inter-connected or chaotic. As the noted science writer Fred Pearce put it: "nature doesn't do gradual change". However, this claim, like that of Linnaeus, oversimplifies the workings of the natural world. As we will see, in complex systems it is not always clear what the underlying causative mechanism for change is; although the change itself might be rapid, the events which give rise to it might occur over much longer time scales.
What are the human implications of the Great Acceleration and the changes it has wrought? A group of prominent Earth scientists led by Johan Rockström [1] entitled their landmark 2009 article "A safe operating space for humanity". They argued that we are heading toward thresholds or tipping points in a number of areas, points that once crossed could irreversibly lead to structural change in vital Earth systems such as climate or ecosystems. They identified nine Earth system processes and their associated numerical threshold values, any of which if exceeded could lead to "dangerous environmental change":
1. climate change
2. rate of biodiversity loss
3. nitrogen cycle and phosphorus cycle change
4. stratospheric ozone depletion
5. ocean acidification
6. global freshwater depletion
7. change in land use
8. atmospheric aerosol loading
9. chemical pollution
The authors claimed that the first three of these Earth system processes have already exceeded safe limits. For climate change, at around 400 ppm in 2014, atmospheric CO2 levels are now well over the recommended 350 ppm, a value based on ensuring the continued stability of our major ice sheets. For the remaining items, we are closing in on the recommended safe values. The exception is stratospheric ozone depletion-the steady decline in Earth's stratospheric ozone layer, together with the larger decrease at the poles each spring (the "ozone hole"). Only in this case are we taking effective action in moving to an environmentally safe position. With the Montreal Protocol, which came into force in 1989 and has been ratified by 196 nations, countries agreed to phase out the manufacture of CFCs, and to replace their use as refrigerants and for other purposes with more ozone-friendly products.
The Montreal Protocol has been judged a success, in marked contrast to attempts to limit GHG emissions. What are the reasons for this success? First, replacement products that did not damage the ozone layer were readily available. Second, the quantities of CFCs annually released to the atmosphere were very small compared with releases of CO2. Even at its peak around 1990, combined production of CFC-11 and CFC-12, the two most important ozone-destroying compounds, was less than one million tonnes [17], compared with CO2 releases from fossil fuel combustion alone of 34.5 billion tonnes in 2012 [3].
William Laurance [18], a tropical biologist, explains that tipping points can arise in nature in several different ways. Chain reactions in strongly linked systems occur not only in nuclear fission, but also in epidemics. One person carrying a disease such as flu can infect a few other people, who in turn can each infect a few others, so that the disease, after an apparently slow start, can suddenly become a pandemic. The existence of abrupt thresholds in non-linear systems is another cause of tipping points. Lakes or ponds can receive a steady input of nutrients from runoff without visible damage, then suddenly change from clear to turbid with algal blooms. Yet another way in which tipping points arise is through positive feedbacks. Negative feedbacks tend to stabilise a system, but positive feedbacks act to destabilise it. An example of a positive feedback is Arctic sea ice. Sea ice reflects more sunlight back into space than open water does-we say it has a higher albedo. As the Arctic region warms, as is presently happening, the area covered by summer sea ice shrinks, which leads to more open water, which absorbs more sunlight, and so on.
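The destabilising character of a positive feedback can be sketched with the standard geometric-series gain: if a fraction g of each unit of response is returned as further forcing, the total response to an initial perturbation is 1/(1 − g) times larger, and there is no finite equilibrium once g reaches 1. A toy illustration (the numbers are ours, not the paper's):

```python
def feedback_gain(g):
    """Total amplification of an initial perturbation when a fraction g
    of the response is fed back as additional forcing.

    The response sums the series 1 + g + g^2 + ... = 1/(1 - g),
    which diverges (runaway feedback) as g approaches 1."""
    if g >= 1:
        raise ValueError("runaway feedback: no finite equilibrium")
    return 1.0 / (1.0 - g)

# A negative feedback (g < 0) damps the response; a positive one amplifies it.
print(feedback_gain(-0.5))  # damped response, below 1x
print(feedback_gain(0.5))   # amplified response, 2x
```

The ice-albedo loop described above corresponds to a positive g: each increment of open water adds further warming, pushing the system toward the runaway end of this curve.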
Clearly, tipping points would be less of a problem if we could predict them. We would still need to take effective action to step back from the brink, but at least we would know at what level of, for example, CO2 concentration in ppm a sudden change to a new, dangerous state would happen. However, for the climate system, a Danish study [19] of past instances of natural climate change (the so-called "Dansgaard-Oeschger events" of rapid climate change during the last glacial period) concluded that they had very limited predictability. Certainly, mathematical climate models have a poor record of retrospectively "predicting" these events. Given the general interconnectedness of the nine Earth system processes listed above, it is likely that predictability for any of them will be limited.

History repeats itself. It has to. Nobody listens. (Steve Turner)
The problem highlighted by the above quotation goes all the way back to the fall of Troy. In this powerful myth, the walled city of Troy is besieged for many years by a Greek army. Cassandra, the beautiful daughter of King Priam of doomed Troy, is blessed with the gift of perfect foresight, but fated never to be believed. In vain she cautions the Trojans to "beware of Greeks bearing gifts"-the wooden horse left outside the walls by the seemingly departing Greek ships. Her dilemma, however, is real, not mythological: scientists have been issuing increasingly strong warnings about the dangers of global climate change and biodiversity loss for several decades, but judging by the world's response, nobody is listening. So even if we do develop methods for predicting threshold points in either the natural or the social world, there is no guarantee that the relevant policy-makers will heed the message. Before Hurricane Katrina swept through New Orleans in 2005, scientists had accurately predicted that the levees would be breached by a hurricane of that intensity [20].
Influential corporations, however, particularly in the energy industries, believe-probably accurately, in our view-that effective action to step back from the brink will gravely harm their future prospects, profits and control over the political process. They have often been at the forefront of efforts to prevent the adoption of any realistic policies for climate mitigation, and are a source of funding for prominent climate sceptics [21]. They have a variety of arguments and excuses for inaction. First, they can argue that the climate is not changing, but merely undergoing natural variability. The second line of defence is to claim that any climate change that is occurring is not caused by anthropogenic release of greenhouse gases, but by other causes-variation in solar output is a perennial favourite. The third and final argument would be-after a further decade or two of inaction-that it is now too late to do anything but adapt to the rapidly-changing climate.
The leading response to the problems thrown up by the Great Acceleration of population, GDP, natural resource depletion and global pollution is to attempt to solve them using solutions that have worked in the past, namely those based purely on technology. The remarkable advances in Information Technology seem to lend credibility to this approach. Technological fixes promise to solve these multiple and increasingly interwoven problems without the need for any serious social or political changes. Accordingly, they are very popular with policy makers. As we have shown in earlier publications [7,22,23], such optimism is not warranted. We will illustrate what is involved in technological fix solutions by examining Representative Concentration Pathway 2.6 (RCP2.6), one of the four scenarios chosen by the Intergovernmental Panel on Climate Change (IPCC). We examine bioenergy in detail, as it forms one of this scenario's major components. In this low greenhouse gas emissions scenario, radiative forcing is reduced to 2.6 watts per square metre (W/m2) by 2100. (Radiative forcing gives the change in energy flux to the Earth, measured in W/m2, compared with the pre-industrial era [8,24].) The rise in global average surface temperature, relative to pre-industrial, would peak by 2050, then fall to a level well below 2 °C by the year 2100.
The RCP2.6 scenario achieves its low emissions mainly by a combination of the following:
- Carbon capture and sequestration (CCS) from fossil fuels (coal and natural gas) and biomass (which gives rise to negative CO2 emissions)
- More use of non-fossil energy sources, particularly biomass and nuclear
- Improved energy efficiency, giving big reductions in energy intensity.
Importantly, like many similar scenarios, what RCP2.6 promises is that sufficient energy can be produced to meet our planned economic growth via use of low emission technologies, provided we have sufficient access to the Earth's resources, including its ecosystems. RCP 2.6 assumes that carbon capture and sequestration (CCS) will cumulatively sequester a total of 2200 Gt CO2 by 2100 (Gt = gigatonne = 10^9 tonnes), beginning in 2020. CCS has been discussed for two decades, but present total annual sequestration is only a few million tonnes of CO2, including CO2 used in enhanced oil recovery. This quantity is negligible, considering that global CO2 emissions from fossil fuel use have risen from 22.6 Gt per year in 1991, when the first IPCC report was released, to 34.5 Gt in 2012 [3].
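The mismatch in scale is straightforward to quantify: sequestering 2200 Gt of CO2 over the 80 years from 2020 to 2100 implies an average rate of 27.5 Gt/year, roughly the magnitude of today's total fossil fuel emissions, against actual sequestration today of only a few million tonnes a year. A sketch of the arithmetic (the "current rate" is an order-of-magnitude assumption on our part):

```python
TARGET_GT = 2200.0   # cumulative CO2 to be sequestered under RCP2.6
YEARS = 2100 - 2020  # sequestration window assumed in the scenario

required_rate = TARGET_GT / YEARS  # average Gt CO2 per year
current_rate = 0.005               # ~a few Mt/yr today (rough assumption)
scale_up = required_rate / current_rate

print(required_rate)    # average rate needed, Gt/yr
print(round(scale_up))  # several-thousand-fold scale-up implied
```

Whatever the exact figure for current sequestration, the required scale-up is three to four orders of magnitude, starting almost immediately.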
The main approach discussed is capture of CO2 from concentrated sources of emissions such as large coal-fired power plants. CO2 concentrations in the exhaust stacks of power stations are several hundred times those in ambient air, making collection much easier and cheaper in both energy and money terms. The collected CO2 is then to be either compressed or liquefied and sent by pipeline to the sequestration site for permanent burial. Possible sites for disposal include disused oil and gas fields, salt mines, deep saline aquifers (including sub-sea ones) and even the ocean depths [6].
CCS would only address the climate change problem, not fossil fuel depletion. Probably only about 40 % of CO2 emissions from fossil fuel combustion could feasibly be captured without incurring very high costs. Further, we need to consider CO2 emissions from land use changes (e.g. deforestation) and emissions of other greenhouse gases. CCS faces many serious problems, including public acceptance of the vast scale of sequestration needed, the high energy costs of capturing CO2 at power plants, the risks microseismicity poses both during injection and to storage integrity, the potential for CO2 leakage, and possible limits on annual injection rates [6].
Under RCP 2.6, global primary energy supply is estimated to double by 2100 from its current value of 522 EJ (EJ = exajoule = 10^18 J) [3]. Allowing for population growth, this represents about a 30 % per capita increase over current consumption. By 2100, the combined share of alternative energy (nuclear and renewable) in the global primary energy mix will have risen to around 55 %. Alternatives will be dominated by large increases in nuclear and biomass energy. Bioenergy is expected to provide about 250 EJ of total energy, more than a five-fold increase on that currently supplied. Nuclear energy is projected to reach about 180 EJ by 2100 (up from around 10 EJ today; nuclear output has in fact fallen steadily since 2006) [3]. Further, Dittmar [25] has presented evidence that such high levels of output would ultimately face uranium supply limits.
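The quoted per-capita increase can be checked directly: doubling 522 EJ while population grows from roughly 7.2 billion today to almost 11 billion by 2100 yields an increase of about a third per person. A quick check (the population figures are the approximate ones assumed in this discussion):

```python
current_supply_ej = 522.0  # present global primary energy supply
current_pop = 7.2e9        # approximate mid-2010s world population
future_pop = 10.9e9        # "almost 11 billion" assumed for 2100

per_capita_now = current_supply_ej * 1e18 / current_pop       # J per person
per_capita_2100 = 2 * current_supply_ej * 1e18 / future_pop   # doubled supply

increase = per_capita_2100 / per_capita_now - 1
print(round(100 * increase))  # percent increase per person
```

The result, about 32 %, is consistent with the "about 30 %" figure in the text.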
Biomass energy is anticipated to be the largest single source of energy in 2100. Yet bioenergy plantations will increasingly compete for land with food production, forestry and pasture. A recent study [26] showed that typical yield estimates for large-scale bioenergy are far too high, which means that energy inputs would form a larger fraction of gross energy output. Another recent study [27] found that afforestation could actually make matters worse, because the carbon sequestration benefit would be offset by a decline in surface albedo, leading to higher levels of absorbed insolation. Bioenergy plantations would presumably face similar problems. They could also expect to face additional stresses from a changing climate, in particular water shortages [29]. It is estimated that more than 50 % of the fresh water run-off that is temporally and spatially available is already appropriated by humans. Thus water shortages alone could act to limit any future growth in bioenergy.
From an Earth System perspective, any additional demands on biomass must also be considered in light of the proportion of global net primary production already appropriated by humans [30]. Land-based net primary production (NPP) is estimated to be around 1900 EJ [6]. Some estimates suggest that human appropriation of the primary production from vegetation (HANPP) has risen from 13 % to 25 % over the last century [31]. The large-scale expansion of bioenergy within RCP 2.6 would see this percentage increase significantly, even assuming that demand for food, forage and fibre remains constant while population increases to almost 11 billion by 2100.
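The size of that increase follows from the figures above: 250 EJ of bioenergy set against roughly 1900 EJ of terrestrial NPP would, by itself, add around 13 percentage points to the human appropriation. A rough sketch (this simplifies by counting bioenergy EJ-for-EJ against NPP, whereas the biomass actually harvested would be larger than the energy delivered):

```python
npp_ej = 1900.0       # estimated terrestrial net primary production
bioenergy_ej = 250.0  # RCP2.6 bioenergy demand by 2100
hanpp_now = 0.25      # current HANPP (upper estimate cited in the text)

extra_share = bioenergy_ej / npp_ej
print(round(100 * extra_share, 1))             # extra percentage points of NPP
print(round(100 * (hanpp_now + extra_share)))  # implied total appropriation
```

Even under this conservative accounting, human appropriation of NPP would approach 40 %, before any growth in demand for food, forage or fibre.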
Further, an important additional aspect of RCP 2.6 is the need for negative emissions of CO2 to the atmosphere, achieved by sequestering the emissions from bioenergy (BECCS). BECCS, like fossil fuel CCS, is yet to be developed at scale. Although the range is large, proponents suggest that between 0 and 20 Gt CO2/year could be sequestered by 2100 [32], the upper limit being roughly equal to 1990 global CO2 emissions. While the lack of fossil fuel CCS has already been noted, an additional barrier to implementation at the scale necessary to achieve the radiative forcing of RCP 2.6 is the limit imposed by the relatively low energy content of biomass. At roughly 15-18 MJ/kg, biomass contains significantly less energy per unit mass than fossil fuels, so energy production becomes highly sensitive to energy inputs. It is estimated that CCS can reduce fossil fuel generation efficiencies by 25 %, thus requiring more fuel input to balance the losses [6]. For bioenergy, the low energy content of biomass means that energy inputs for harvesting, transport and processing, as well as CO2 emissions disposal, further reduce the net return relative to fossil fuel electricity production. Meeting these additional energy inputs for a fixed net supply of energy will place even further demands on the quantity of biomass needed. Ultimately, the limit in such systems is the need for a net energy return, that is, energy output/energy input > 1.0. This ratio may need to be much higher, perhaps 2-5, to be sure that all uncertain inputs are included [33].
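The net-energy constraint can be made concrete. A 25 % efficiency penalty from CCS means fuel input must rise by a factor of 1/0.75 ≈ 1.33 for the same output; for biomass, each additional input in the chain eats into an already modest gross return. A toy calculation, in which all the input figures are illustrative assumptions rather than measurements:

```python
def net_energy_ratio(gross_output, inputs):
    """Energy return on energy invested: must exceed 1.0 to be a net
    source, and arguably 2-5 once uncertain inputs are counted."""
    return gross_output / sum(inputs)

# Illustrative BECCS chain, all in EJ of primary energy (assumed values):
gross = 10.0
inputs = [1.5,   # cultivation and harvest
          1.0,   # transport of low energy-density feedstock
          1.5,   # processing and drying
          2.0]   # CO2 capture, compression and disposal

print(net_energy_ratio(gross, inputs))  # ~1.67: above 1, but below 2-5

# CCS efficiency penalty on a thermal plant: same output needs more fuel.
extra_fuel_factor = 1.0 / (1.0 - 0.25)
print(round(extra_fuel_factor, 2))  # ~1.33x fuel input
```

Under these assumed inputs the chain is a net energy source, but sits below the 2-5 margin the text suggests is needed once uncertainties are included; any worsening of yields or logistics pushes it toward break-even.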
Although there is a doubling of primary energy supply, RCP 2.6 assumes a nearly fivefold reduction in global energy intensity (GJ/$ of global GDP) by 2100 [28]. Large energy efficiency gains would reduce the need for rapid uptake of CCS and alternative energy sources, as discussed above. But equipment efficiency gains will be offset by the moves both to non-conventional fossil fuels, with their lower net energy returns (e.g. oil production from tar sands), and to energy-intensive CCS. The result will be ever-rising primary energy production for each EJ of net energy used in the economy. Further, any large decline in energy intensity will promote energy "rebound" [6]. One reason for rebound is that any gain in, for example, car fuel efficiency will reduce the per-km cost of motoring, leading to some increase in travel (and with it, travel energy). Even if travel does not rise, the money saved is now available for the purchase of other goods and services, which again leads to further energy use. And energy efficiency can conflict with other desired efficiency measures, for example land efficiency in agriculture, or time efficiency (speed) in transport [34].
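The rebound logic can be sketched with a simple elasticity calculation; the 20 % efficiency gain and 0.3 elasticity below are illustrative assumptions, not empirical estimates:

```python
def realized_saving(efficiency_gain, rebound_elasticity):
    """Fraction of energy actually saved after direct rebound.

    A fuel-efficiency gain lowers the energy cost per km by
    `efficiency_gain`; travel then rises by `rebound_elasticity`
    times that cost reduction, eroding part of the saving."""
    extra_travel = rebound_elasticity * efficiency_gain
    energy_after = (1 - efficiency_gain) * (1 + extra_travel)
    return 1 - energy_after

engineering = 0.20  # assumed 20 % better fuel economy
with_rebound = realized_saving(engineering, 0.3)
print(round(100 * with_rebound, 1))  # realized saving, % (below the 20 %)
```

Here the engineering saving of 20 % shrinks to about 15 % once the induced travel is counted, and this covers only the direct rebound; the respending of saved money, noted above, erodes it further.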
From the foregoing it is clear that, rather than shifting us to a future where maintenance of the Earth's resources and ecosystems is an essential part of any solution, energy scenarios such as those represented by RCP 2.6 are likely to replace one set of problems by another. It would be folly indeed for us to destroy the very ecosystems we seek to protect by consuming them along the path toward a stable future climate.

Surviving In An Uncertain World
I like the dreams of the future better than the history of the past. (Thomas Jefferson)

So, how can we cope with the rapid changes to our physical environment? Despite the progress of the relevant sciences, we will find predicting the future behaviour of the Earth system increasingly difficult if we continue on our present path. Again we can use climate change as an example. Climate scientists speak of climate sensitivity, defined as the equilibrium global average temperature increase for a doubling of the atmospheric CO2 concentration from pre-industrial levels. According to the Intergovernmental Panel on Climate Change [8], the best estimate is about 3 °C. But climate sensitivity is poorly constrained at the upper end of its probability distribution-science-speak for "we really can't be sure".
The problem is this. The greenhouse gases in the atmosphere (mainly CO2, methane, nitrous oxide, tropospheric ozone and CFCs) give a positive climate forcing that adds to the sun's incoming radiation. But aerosols, mainly sulphate pollutants, block out some incoming radiation-they provide a negative forcing. The algebraic sum of the two is the net forcing. The difficulty is that although we can accurately measure the positive forcing, the global patchiness of aerosols and their interaction with clouds make it difficult to determine the negative forcing [35,36]. Recent estimates by the IPCC [8] suggest that aerosols (including their interaction with clouds) reduce radiative forcing by around one W/m2, although estimates range from a reduction of about two W/m2 down to zero. So we cannot accurately determine climate sensitivity from measured temperature increases. If the aerosol negative forcing is small in magnitude, sensitivity is small; if it is large, then climate sensitivity to a CO2 doubling is much higher.
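The dependence of inferred sensitivity on the aerosol term can be sketched with the simplest energy-balance relation, S ≈ F_2x · ΔT / F_net. This is a deliberately crude toy: it ignores ocean heat uptake, so the absolute numbers mean little and only the direction of the dependence is the point. The forcing and warming values below are assumptions for illustration:

```python
F_2X = 3.7     # W/m2 forcing for a CO2 doubling (commonly used value)
DT_OBS = 0.85  # assumed observed warming since pre-industrial, deg C
F_GHG = 2.8    # assumed positive greenhouse forcing, W/m2

def inferred_sensitivity(aerosol_forcing):
    """Toy climate sensitivity implied by observed warming, ignoring
    ocean heat uptake. aerosol_forcing is negative, in W/m2."""
    net_forcing = F_GHG + aerosol_forcing
    return F_2X * DT_OBS / net_forcing

# The more strongly aerosols have masked warming, the higher the
# sensitivity implied by the same observed temperature record.
for aer in (-0.5, -1.0, -1.5):
    print(round(inferred_sensitivity(aer), 2))
```

The same temperature record is thus compatible with a wide range of sensitivities, depending on how much warming the aerosols have been hiding.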
Since these sulphate aerosols are dangerous pollutants in the lower atmosphere where we live and breathe (the troposphere), most nations are increasingly trying to reduce them; large-scale use of CCS and BECCS would greatly aid this process. What if we succeed in eliminating this unintentional aerosol geoengineering? If the aerosols have in fact been protecting us from much of the warming, by removing them we will merely swap the known and serious health effects of air pollution (particularly in Asian megacities) for the uncertain effects of a rapid rise in temperatures-not an easy choice to make.
Similar uncertainties would apply to other climate change problems, such as climate forcing from changes in surface albedo due to ice melt, ocean mixing of heat, or the response of vegetation (and crops) to changes in temperature, precipitation, CO2 levels, and fire and pest risk. The issues raised would be similar for other global environmental problems. They are very complex, and often involve numerous feedbacks at varying time scales.
So we need to make sure that all Earth system processes stay within safe operating limits, and given the uncertainty in exactly where the boundaries lie, we need a wide margin of error. This means we will need to massively cut our rate of fossil fuel use and our conversion of forests to other land uses, and also cut down on water, air and land pollution. Conventional suggested solutions to these multiple and interconnected problems include replacing fossil fuels with alternative fuels and improving energy efficiency, as discussed above, and increasing agricultural output on existing farmland. As the continued general drift toward unsafe thresholds shows, these solutions are not working.
The authors of [37] have put forward a "shrink and share" proposal. As the name suggests, their proposal would see reductions in fossil fuel use, greenhouse gas emissions, pollutants and so on, but in the context of a move toward global equity. At present, access to the Earth's resources and use of the Earth's pollution-absorption capacity are very unevenly distributed among the world's people. For example, on a national average per capita basis, car ownership, annual air trips and electricity consumption vary a thousand-fold or more between rich nations, mostly in the OECD, and poor countries, many of them in tropical Africa [3,6].
Only about 20% of the world's people have a high level of material consumption. What would happen if all humans wanted parity in consumption with the OECD, given that not only is world population still growing, but so is the expected level of consumption in OECD countries? Already, according to Kitzes and colleagues, our demands on the biosphere have exceeded its capacity by 20%; in other words, we are already in unsustainable "overshoot". The findings of Rockström et al. [1], discussed above, also support the notion that our rising demands are pushing the planet closer to irreversible tipping points. It follows that there is little chance of even a continuation of present unequal global consumption levels, let alone a move to global, and inter-generational, equity at high consumption levels.
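The overshoot arithmetic above can be made explicit. The sketch below normalises the biosphere's regenerative capacity to one Earth and uses the 20% overshoot figure quoted from Kitzes and colleagues; the multiplier for OECD-level consumption is a purely hypothetical illustration, not a figure from the cited sources.

```python
# Rough overshoot arithmetic. Biocapacity is normalised to 1 "Earth";
# a footprint of 1.2 corresponds to the 20% overshoot quoted in the text.
# The OECD consumption multiplier is a hypothetical assumption.

BIOCAPACITY = 1.0          # Earth's regenerative capacity, normalised
CURRENT_FOOTPRINT = 1.2    # humanity's demand: 20% overshoot

def earths_needed(footprint):
    """Number of Earths required to sustain a given footprint indefinitely."""
    return footprint / BIOCAPACITY

# If average global consumption rose several-fold toward OECD levels,
# the required biocapacity would scale roughly linearly with it:
OECD_MULTIPLIER = 3.0      # hypothetical illustration only
print(earths_needed(CURRENT_FOOTPRINT))
print(earths_needed(CURRENT_FOOTPRINT * OECD_MULTIPLIER))
```

Even under this crude linear scaling, universal OECD-level consumption would demand several Earths' worth of biocapacity, which is the point the paragraph makes about the impossibility of high-consumption equity.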
If technology alone cannot give us a sustainable and equitable future, we will need to re-think how we live our lives. We will need to reorient our economies toward satisfying the basic human needs of all humanity, while at the same time minimising both our use of non-renewable resources and pollution of soil, air and water. This will require us to develop innovative solutions for our cities and transport systems: for example, designs which view cities and their transport as a system and focus on meeting human needs under rising environmental and resource constraints. But we will need to look further than these modest goals. The Millennium Development Goals, adopted by the UN in 2000, set out a wide range of targets to be achieved by 2015 if a more equitable global society was to be forged. Included among these are eradication of extreme hunger and poverty, universal primary education, empowerment of women, reduction in child mortality, and environmental sustainability [6]. We are clearly still far from meeting these goals, but it is unarguable that they must inform our search for a truly sustainable supply of energy.
The changes we must make cannot be short-term only, as much of the recently emitted CO2 will stay in the atmosphere for thousands of years. In designing these new ways of living we can learn much from some of the coping strategies used by presently low-consumption societies, and from the practices of our own societies a few generations back, when human labour contributed more to meeting our needs than it does now. As we have shown, we can no longer rely on biomass (and animals) to supply all our energy needs as they did before the industrial revolution. This lifestyle change should be welcomed rather than endured. Much of our consumption is geared toward satisfying corporate needs for ever-expanding growth, dressed up as human needs. And conversely, those human needs that cannot presently be catered for by the market are short-changed.

Conclusions
Because things are the way they are, things will not stay the way they are. (Bertolt Brecht)

Living on Earth has always carried some risk for humans. A host of scientific and technical advances has allowed humans much control over the natural world, and blunted the impact of natural hazards such as drought and disease, particularly for those of us fortunate enough to live in industrialised countries. But these same advances have also led to a vast rise in the use of Earth's non-renewable resources, and to the overwhelming of the planet's pollution absorption capacity, particularly for CO2 emissions. These changes to energy flows and material cycles have pushed the Earth system closer to a number of tipping points. Rather than gaining control, we risk a descent into the chaos of sudden climatic and environmental changes. Living on Earth is once again becoming a very risky business.
Our knee-jerk response to our predicament has been to call for the application of yet more technology, and greater consumption of the Earth's resources. But we have argued that such an approach has little chance of success. Making the train go faster will not get you to your destination any quicker if you are on the wrong train. The technology and science that have brought about the great acceleration in our society have solved some problems, but have brought others into being. Our future must be directed toward meeting the genuine needs of all humans, and this is best done by safeguarding the ecosystems which sustain us, and in such a way that future generations are not left to pick up the pieces. Rather than making the Earth's ecosystems subservient to human science and technology, as illustrated by far greater use of biomass in our energy system, our efforts must be put into directing science and technology towards protecting the Earth, and us.