As is their wont, Nature declined to publish these comments (and our responses) in the journal itself, but the new commenting feature at Nature.com allowed the exchange to be published online with the paper. Cognisant that probably few people will read this exchange, Bill Laurance and I decided to reproduce them here in full for your intellectual pleasure. Any further comments? We’d be keen to hear them.

In this paper, Laurance and co-authors have tapped the expert opinions of ‘veteran field biologists and environmental scientists’ to understand the health of protected areas in the tropics worldwide. This is a novel and interesting approach and the dataset they have gathered is very impressive. Given that expert opinion can be subject to all kinds of biases and errors, it is crucial to demonstrate that expert opinion matches empirical reality. While the authors have tried to do this by comparing their results with empirical time-series datasets, I argue that their comparison does not serve the purpose of an independent validation.

Using 59 available time-series datasets from 37 sources (journal papers, books, reports etc.), the authors find a fairly good match between expert opinion and empirical data (in 51/59 cases, expert opinion matched empirically-derived trend). For this comparison to serve as an independent validation, it is crucial that the experts were unaware of the empirical trends at the time of the interviews. However, this is unlikely to be true because, in most cases, the experts themselves were involved in the collection of the time-series datasets (at least 43/59 to my knowledge, from a scan of references in Supplementary Table 1). In other words, the same experts whose opinions were being validated were involved in collection of the data used for validation.

Sridhar raises a relevant point but one that, on careful examination, does not weaken our validation analysis.

As detailed in our Supplementary Information, we made a concerted effort to locate fully independent time-series datasets against which to test our interview findings, but struggled to find usable overlap with the specific protected areas, guilds and potential environmental drivers evaluated in our study. Fortunately, we ultimately located 59 empirical time-series datasets that met several a priori criteria we established, including two important safeguards: (1) most were published only after our expert interviews were completed, thereby minimizing the exposure of most of our experts to these reports; and (2) each of the response variables we tested was derived by averaging up to 4-5 separate expert opinions, thereby diluting the impact of any one individual’s opinion (although we removed low-confidence opinions from our analysis).

Ideally, one would exclude any empirical dataset in which our experts had any involvement at all, even as a minor author, but this was simply not possible. Most of the protected areas we studied had relatively few field biologists with long-term expertise, and most of these experts were included in our study (216 were co-authors). Had we excluded every potential validation study in which one of our experts had had even marginal involvement, we would have had little basis for validation. As it was, the 59 empirical time-series datasets we identified permitted us to test just a small fraction (1.6%) of the 3,589 expert responses generated by our study. Importantly, we acknowledged the limitations of our approach by stating in our Supplementary Information that the safeguards we imposed provided “a more independent test” (not a fully independent test) of our interview data.

These caveats aside, if Sridhar is correct that our validation analysis was compromised, one would expect the datasets in which our interviewed experts had any involvement to agree more frequently with the interview data than those in which our experts had no involvement at all. Of the 59 validation studies, our co-authors had some involvement in 43 and no involvement in the remaining 16. The former agreed with our interview data in 88.4% of cases (38/43), and the latter in 81.3% (13/16). This small difference was not statistically significant (Gadj=0.44, df=1, P=0.51; G-test for independence, adjusted for sample size).
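For readers who want to check the comparison, the reported statistic can be reproduced from the agreement counts alone. Below is a minimal Python sketch; it assumes the "adjusted for sample size" step is the standard Williams correction to the G-test, which is consistent with the values reported above:

```python
import math

# 2x2 contingency table of validation outcomes:
# rows = expert involvement (some, none); columns = (agreed, disagreed)
obs = [[38, 5],   # co-authors involved in 43 datasets; 38 agreed
       [13, 3]]   # no involvement in 16 datasets; 13 agreed

n = sum(sum(row) for row in obs)                                # 59
row_tot = [sum(row) for row in obs]                             # [43, 16]
col_tot = [sum(obs[i][j] for i in range(2)) for j in range(2)]  # [51, 8]

# G statistic: G = 2 * sum(O * ln(O / E)), with E the usual expected counts
g = 2 * sum(
    obs[i][j] * math.log(obs[i][j] / (row_tot[i] * col_tot[j] / n))
    for i in range(2) for j in range(2)
)

# Williams correction for small samples (assumed here to be the
# "adjusted for sample size" step); for a 2x2 table (r-1)(c-1) = 1
q = 1 + (n * sum(1 / r for r in row_tot) - 1) * \
        (n * sum(1 / c for c in col_tot) - 1) / (6 * n)
g_adj = g / q

# P-value from the chi-square distribution with 1 df: sf(x) = erfc(sqrt(x/2))
p = math.erfc(math.sqrt(g_adj / 2))

print(round(g_adj, 2), round(p, 2))  # 0.44 0.51
```

With these counts the adjusted statistic comes out to Gadj ≈ 0.44 with P ≈ 0.51, matching the reported result: no evidence that the involved and uninvolved datasets agreed at different rates.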

Hence, we suggest that the safeguards we imposed were reasonable given the severe constraints on suitable time-series datasets—limitations we readily acknowledged. Our safeguards appear to have been largely effective, as otherwise one would expect the frequency of agreement to differ between those validation tests in which our experts had some involvement versus those with no involvement at all.

Biodiversity is disproportionately concentrated in the tropics, with more than half of all known species inhabiting tropical forests. Although reserves are not the only strategy to conserve biodiversity, they are believed to be highly cost-effective in protecting it1. A relative wave of optimism followed the recent Convention on Biological Diversity report that pointed to an increase in the number of reserves created in the tropics within the last decade2. Laurance et al., however, call our attention to a very serious concern: protected areas in the tropics may not be effectively protecting biodiversity. The authors go further and conclude that many tropical reserves are actually losing biodiversity, having documented declines in the abundance of various sensitive guilds – e.g., apex predators, stream-dwelling amphibians and large-seeded old-growth trees – over the past 20 to 30 years. The study was based on a huge number of interviews, but given the practical consequences of this type of conclusion, we identified a major limitation that we believe deserves careful consideration. We suggest that the empirical data they use are not appropriate to infer the ‘health’ of reserves across the entire tropical region.

According to the latest statistics from the UNEP World Conservation Monitoring Centre, there are now over 157,000 nationally designated protected areas covering about 12.7 per cent of the world’s land area outside Antarctica. Tropical reserves are not distributed homogeneously across the world’s continents and countries, so any study aiming to measure global tropical reserve ‘health’ should be carried out using truly stratified and representative samples that account for this imbalance in the distribution of reserves. Therefore, we suggest that regions with more reserves, in number and area, should be more extensively sampled if one is interested in evaluating biodiversity trends in tropical reserves. For example, Brazil, one of the world’s most mega-diverse countries, has the largest tropical protected-area system in the world, with 310 federal, 568 state, 89 municipal, and 629 private reserves totalling approximately 150 million ha (Brazilian Ministry of the Environment); yet Laurance et al. selected only one reserve in the Atlantic Forest (~430 ha) and three in the Amazon (together ~143,000 ha). These reserves represent less than 0.004% of the area of Atlantic Forest reserves and 0.12% of Amazon reserves in Brazil. Hence, their dataset is neither stratified nor randomly sampled for assessing a very complex issue, such as reserve ‘health’, at least for a large part of the tropical world. This does not mean that Brazilian reserves are not threatened by anthropogenic pressures or that improvements in reserve management are not needed. What we are suggesting is that simplistic approaches to assessing reserve ‘health’ and unrepresentative sampling designs can create more problems for conservation in the tropics (e.g., ill-intentioned arguments that reserves are not important) than solutions based on globally representative data that are also useful for regional or local decision-making. In short, although we applaud the initiative by Laurance et al. in bringing this issue to our attention, we suggest their letter should be seen as the general opinion of a group of researchers about biodiversity trends in tropical reserves. It is not supported by globally representative data in a statistical sense, and the limitations of its inferences should at least have been acknowledged in their original contribution.

Roque and Siqueira suggest that, because protected areas are not evenly distributed across the world’s tropical forest regions, conclusions based on our pantropical sample of 60 protected areas might lack statistical validity. This is a pertinent concern because we did hope to identify some broad trends in the biological ‘health’ of tropical forest reserves globally.

However, our selection of tropical forest protected areas was in fact broadly representative, for several reasons:

1) The 60 reserves we sampled were evenly stratified across the world’s three major tropical forest regions: Africa (including Madagascar), the Americas (South and Central America plus the Caribbean), and the Asia-Pacific region (Southeast Asia, South Asia, Melanesia and tropical Australia). A total of 36 different nations were represented in this sample.

2) Based on data from the World Database on Protected Areas (www.wdpa.org), we found no significant difference in the frequency of high-protection (IUCN Categories I-IV), multiple-use (Categories V-VI) and unclassified reserves between our sample of 60 protected areas and all 16,038 reserves found in the same tropical nations.

3) Likewise, we found no significant difference in the geographic isolation of our 60 reserves (travel time to the nearest city of > 50,000 residents) compared to an equal number of reserves randomly stratified across the same 36 nations.

4) Finally, across the tropical nations represented in our study, we recently tested for and found a strong, positive relationship (rs=0.427, n=33, P=0.014; Spearman rank correlation) between the number of reserves we selected and current forest cover1 (China, Nepal and Australia were excluded from this comparison because most of their forest cover is non-tropical).
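For readers unfamiliar with the test in point 4, a Spearman rank correlation is simply the Pearson correlation computed on the ranks of the two variables. The sketch below runs on illustrative, hypothetical numbers – not the study's actual per-nation reserve counts or forest-cover data, which are not reproduced here:

```python
# Spearman rank correlation = Pearson correlation of the ranks.
# These country-level values are hypothetical placeholders, NOT the
# study's actual reserve counts or forest-cover figures.
reserves_sampled = [1, 2, 2, 3, 4, 5, 6, 8]          # reserves per nation
forest_cover_kha = [90, 150, 60, 300, 250, 480, 400, 700]

def ranks(xs):
    """1-based ranks, with ties assigned the average (mid) rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1                       # extend over the tied run
        mid = (i + j) / 2 + 1            # average of the tied positions
        for k in range(i, j + 1):
            r[order[k]] = mid
        i = j + 1
    return r

def spearman(x, y):
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

rs = spearman(reserves_sampled, forest_cover_kha)
print(round(rs, 3))
```

A value of rs near +1 means nations with more forest cover contributed more reserves to the sample, which is the pattern the authors report (rs=0.427, P=0.014) for their real data.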

From these findings we can conclude that the protected areas in our study (1) broadly sampled the world’s major tropical forest regions and represent most tropical nations, (2) reasonably reflect the state of existing reserves in terms of their current legal-protection status, (3) are comparable to existing reserves in terms of their geographical isolation from nearby human populations, and (4) reasonably sampled, at a national scale, the large variability in tropical forest cover. By all of these measures, our reserves were broadly representative.

We do, nonetheless, concede that one aspect of our sampling strategy – one that Roque and Siqueira do not mention – might have created a subtle bias. It was impossible for us to sample reserves in a truly random fashion because we could only include those for which expert knowledge was available (in our study, any reserve lacking at least 10 journal publications and 4-5 experts willing to be interviewed was excluded). We highlight this limitation because recent evidence from a single reserve suggests that wildlife poaching is reduced in areas where researchers are present2. If this finding applies more generally, then our conclusions regarding the condition of tropical protected areas might be slightly too optimistic, which would underscore the potential benefits of sustained field research for tropical biodiversity.
