You couldn’t really do ecology if you didn’t know how to construct even the most basic mathematical model; even a simple regression is a model (the non-random relationship of one variable to another). The good thing about these simple models is that it is fairly straightforward to interpret the ‘strength’ of the relationship, in other words, how much variation in one thing can be explained by variation in another. Provided the relationship is real (not random), and provided there is at least some indirect causation implied (i.e., it is not just a spurious coincidence), there are many simple statistics that quantify this strength; in the case of our simple regression, the coefficient of determination (R²) is usually a good approximation of it.

When you go beyond this correlative approach and start constructing more mechanistic models that emulate ecological phenomena from the bottom up, things get a little more complicated when it comes to quantifying the strength of relationships. Perhaps the best-known category of such mechanistic models is the humble population viability analysis, abbreviated to PVA.

Let’s take the simple case of a four-parameter population model we could use to project population size over the next 10 years for an endangered species that we’re introducing to a new habitat. We’ll assume that we have the following information: the size of the founding (introduced) population (n), the juvenile survival rate (Sj, the proportion of juveniles surviving from birth to their first year), the adult survival rate (Sa, the annual survival rate of adults from year 1 to maximum longevity), and the fertility rate of mature females (m, the number of offspring born per female per reproductive cycle). Each of these parameters has an associated uncertainty (ε) that combines both measurement error and environmental variation.

If we just took the mean value of each of these three demographic rates (survivals and fertility) and projected a founding population of n = 10 individuals for 10 years into the future, we would have a single, deterministic estimate of the average outcome of introducing 10 individuals. As we already know, however, the variability, or stochasticity, is more important than the average outcome, because uncertainty in the parameter values (ε) means that a non-negligible number of model iterations will result in the extinction of the introduced population. This is something that most conservationists will obviously want to minimise.
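The deterministic version of such a projection can be sketched in a few lines of Python. The parameter values below are assumptions for illustration only (not estimates for any real species), and the model makes strong simplifications: a 50:50 sex ratio, no age structure beyond the juvenile/adult split, and recruits treated as adults after their first year.

```python
def project_deterministic(n0, Sj, Sa, m, years=10):
    """Project total population size using mean demographic rates.

    Assumes a 50:50 sex ratio (so half the population breeds each
    year) and treats all individuals as adults after their first year.
    """
    n = float(n0)
    for _ in range(years):
        births = n * 0.5 * m       # offspring from breeding females
        n = n * Sa + births * Sj   # surviving adults + surviving recruits
    return n

# Illustrative values: Sj = 0.5, Sa = 0.8, m = 1.2 gives an annual
# growth rate of Sa + 0.5*m*Sj = 1.1, so 10 founders grow to ~25.9
# individuals after 10 years in the deterministic projection.
print(project_deterministic(10, 0.5, 0.8, 1.2, years=10))
```

Note that this single trajectory tells us nothing about the risk of extinction; it is only the average outcome.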

So each time we run an iteration of the model, and generally for each breeding interval (most often one year at a time), we choose (based on some random-sampling regime) a different value for each parameter. This gives us a distribution of outcomes after the 10-year projection. Let’s say we did 1000 iterations like this; the proportion of iterations in which the population went extinct provides an estimate of the population’s extinction probability over that interval. Of course, we would probably also vary the size of the founding population (say, between 10 and 100) to find the point at which the extinction probability becomes acceptably low for managers (i.e., as close to zero as possible), without requiring so many individuals that the introduction becomes too laborious or expensive.
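The stochastic iterations can be sketched as a simple Monte Carlo simulation. This is a minimal illustration, not any published PVA: it assumes normally distributed annual deviations of size ε around each mean rate (truncated to valid ranges), binomial survival, and Poisson births, and all parameter values are again hypothetical.

```python
import numpy as np

def extinction_probability(n0, Sj, Sa, m, eps=0.1, years=10,
                           iters=1000, seed=42):
    """Estimate extinction probability over a projection interval.

    Each year, each vital rate is drawn from a normal distribution
    centred on its mean with standard deviation eps (a stand-in for
    combined measurement error and environmental variation), clamped
    to plausible bounds. Survival and births are sampled as binomial
    and Poisson draws, so small populations can drift to zero.
    """
    rng = np.random.default_rng(seed)
    extinct = 0
    for _ in range(iters):
        n = n0
        for _ in range(years):
            # sample this year's vital rates around their means
            sj = float(np.clip(rng.normal(Sj, eps), 0.0, 1.0))
            sa = float(np.clip(rng.normal(Sa, eps), 0.0, 1.0))
            f = max(rng.normal(m, eps), 0.0)
            # demographic stochasticity: binomial survival, Poisson births
            survivors = rng.binomial(n, sa)
            births = rng.poisson(f * (n // 2))  # half are breeding females
            recruits = rng.binomial(births, sj)
            n = int(survivors + recruits)
            if n == 0:
                extinct += 1
                break
    return extinct / iters
```

Running this over a range of founding sizes (say, 10 to 100) would trace out how the extinction probability falls as more individuals are introduced, which is exactly the trade-off managers need to see.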

In July 2015 an American dentist shot and killed a male lion called ‘Cecil’ with a hunting bow and arrow, an act that sparked a storm of social media outrage. Cecil was a favourite of tourists visiting Hwange National Park in Zimbabwe, and so the allegation that he was lured out of the Park to neighbouring farmland added considerable fuel to the flames of condemnation. Several other aspects of the hunt, such as baiting close to national park boundaries, were allegedly done illegally and against the spirit and ethical norms of a managed trophy hunt.

In May 2015, a Texan legally shot a critically endangered black rhino in Namibia, which also generated considerable online ire. The backlash ensued even though the male rhino was considered ‘surplus’ to Namibia’s black rhino populations, and the US$350,000 generated from the managed hunt was to be re-invested in conservation. Together, these two incidents have triggered vociferous appeals to ban trophy hunting throughout Africa.

These highly politicised events are but a small component of a large industry in Africa worth > US$215 million per year that ‘sells’ iconic animals to (mainly foreign) hunters as a means of generating otherwise scarce funds. While to most people this might seem like an abhorrent way to generate money, we argue in a new paper that sustainable-use activities, such as trophy hunting, can be an important tool in the conservationist’s toolbox. Conserving biodiversity can be expensive, so generating money is a central preoccupation of many environmental NGOs, conservation-minded individuals, government agencies and scientists. Making money for conservation in Africa is even more challenging, and so we argue that trophy hunting should and could fill some of that gap.

The debate has an interesting line-up of ecologists, geneticists, palaeontologists (including Australia’s own Mike Archer), developmental biologists, journalists, lawyers, ethicists and even artists. I have no doubt it will be very entertaining.

But let’s not mistake entertainment for reality. It disappoints me, a conservation scientist, that this tired fantasy still manages to generate serious interest. I have little doubt what the ecologists at the debate will conclude.

Once again, it’s important to discuss the principal flaws in such proposals.

Put aside for the moment the astounding inefficiency, the lack of success to date and the welfare issues of bringing something into existence only to suffer a short and likely painful life. The principal reason we should not even consider the technology from a conservation perspective is that it does not address the real problem, namely the reason for extinction in the first place.

One thing that has simultaneously amused, disheartened, angered and outraged me over the past decade or so is how anyone in their right mind could even suggest that scientists band together into some sort of conspiracy to dupe the masses. While this tired accusation is most commonly made about climate scientists, it applies across nearly every facet of the environmental sciences whenever someone doesn’t like what one of us says.

First, it is essential to recognise that we’re just not that organised. While I have yet to forget to wear my trousers to work (I’m inclined to think that it will happen eventually), I’m still far, far away from anything that could be described as ‘efficient’ and ‘organised’. I can barely keep it together as it is. Such is the life of the academic.

More importantly, the idea that a conspiracy could form among scientists ignores one of the most fundamental components of scientific progress – dissension. And hell, can we dissent!

Yes, the scientific approach is one where successive lines of evidence testing hypotheses are eventually amassed into a concept, then perhaps a rule of thumb. If the rule of thumb stands against the scrutiny of countless studies (i.e., ‘challenges’ in the form of poison-tipped, flaming literary arrows), then it might eventually become a ‘theory’. Some theories even make it to become the hallowed ‘law’, but that is very rare indeed. In the environmental sciences (I’m including ecology here), one could argue that there is no such thing as a ‘law’.

Well-informed non-scientists might understand, or at least, appreciate that process. But few people outside the sciences have even the remotest clue about what a real pack of bastards we can be to each other. Use any cliché or descriptor you want – it applies: dog-eat-dog, survival of the fittest, jugular-slicing ninjas, or brain-eating zombies in lab coats.

The title of this post serves two functions: (1) to introduce the concept of ecological catastrophes in population viability modelling, and (2) to acknowledge the passing of the bloke who came up with a clever way of dealing with that uncertainty.

I’ll start with the latter. It came to my attention late last year that a fellow conservation biologist, Dr. David Reed, died unexpectedly from congestive heart failure. I did not really mourn his passing, for I had never met him in person (I believe it is disingenuous, discourteous, and slightly egocentric to mourn someone whom you do not really know personally, but that’s just my opinion), but I did think at the time that the conservation community had lost another clever progenitor of good conservation science. As many CB readers already know, we lost a great conservation thinker and doer last year, Professor Navjot Sodhi (and that, I did take personally). Coincidentally, both Navjot and David died at about the same age (49 and 48, respectively). I hope that being in one’s late 40s isn’t a particularly bad omen for people in my line of business!

My friend, colleague and lab co-director, Professor Barry Brook, did, however, work a little with David, and together they published some pretty cool stuff (see References below). David was particularly good at looking for cross-taxa generalities in conservation phenomena, such as minimum viable population sizes, effects of inbreeding depression, applications of population viability analysis, and extinction risk. But more on some of that below.

Last day of November already – I am now convinced that my suspicions are correct: time is not constant and in fact accelerates as you age (in mathematical terms, a unit of time becomes a progressively smaller proportion of the time elapsed since your birth, so this makes sense). But, I digress…

This short post will act mostly as a spruik for my upcoming talk at the International Congress for Conservation Biology next week in Auckland (10.30 in New Zealand Room 2 on Friday, 9 December) entitled: Species Ability to Forestall Extinction (SAFE) index for IUCN Red Listed species. The post also sets a bit of the backdrop to this paper and why I think people might be interested in attending.

The journal ended up delaying final publication because three groups opposed the metric rather vehemently, including people who are very much in the conservation decision-making space and/or involved directly with the IUCN Red List. It ultimately published our original paper, the three critiques, and our collective response in the same issue (you can read these here if you’re subscribed, or email me for a PDF reprint). Again, I won’t go into any detail here because our arguments are clearly outlined in the response.

What I do want to highlight is that even beyond the normal in-print tête-à-tête the original paper elicited, we were emailed by several people behind the critiques who were apparently unsatisfied with our response. We found this slightly odd, because many of the same objections just kept being re-raised. Of particular note were several recurring accusations.

Consider the great auk (Pinguinus impennis), a formerly widespread and abundant North Atlantic species that was reduced by intensive hunting throughout its range. How did it eventually go extinct? The last remaining population blew up in a volcanic explosion off the coast of Iceland (Halliday 1978). Had the population been large, the small dent in the population due to the loss of those individuals would have been irrelevant.

But what is ‘large’? The empirical evidence, as we’ve pointed out time and time again, is that large = thousands, not hundreds, of individuals.

So this is why we advocate that conservation targets should aim to maintain populations at, or recover them to, the thousands mark. Fewer than that, and you’re playing Russian roulette with a species’ existence.