Mind your app: Could plant ID applications lead to an increase in extinction risk?

I kinda assumed that this recent short article might already have been discussed on the forum, but I was not able to find a mention. The assumptions, methods and apparent conclusions here all seem problematic and the title seems hyperbolic. If you’re willing to raise your blood pressure a little, take a look! (You need to click the green box labelled PDF to read the paper.)

Mind your app: Could plant ID applications lead to an increase in extinction risk?
Published: 2023-08-17
DOI: 10.11646/phytotaxa.609.1.6

Nothing groundbreaking, to say the least. Not sure why it took 8 authors to state the obvious, and address the ‘extinction risk’ of the title in a vague 5-line paragraph.


I cringed when I read this paper a few days ago.

I won’t go into too much detail as I’m about to head out for some fieldwork, but my rough thoughts:

To be fair to the authors, I can understand the point they’re trying to make, but overall the paper (imo) is poorly written, poorly executed, and lacks nuance or any legitimate supporting evidence whatsoever. Some portion of the ‘blame’ also needs to go to the reviewer (note the singular: why was there only one reviewer?) and, to a lesser extent, the editors; god knows how they let this be published in its current form.

A few of my major concerns:

1. I can’t recall ever reading a paper where the main concept in the title appeared so late, and constituted so little of the actual content. Outside of the title, the phrase ‘extinction risk’ appears exactly once in the entire paper, in the sixth-to-last line of the final paragraph!! This is just mind-boggling to me. Imagine trying to publish a (non-correspondence) paper on extinction risk in a particular species and mentioning the word extinction once in the entire manuscript. You’d get desk-rejected in 10 minutes.

To me, this ‘paper’ is like an article from a D-tier online news outlet: a sensationalist clickbait headline, and when you open the article the apparent main topic is effectively non-existent in the actual text. The framing of the title as a question also brings to mind Betteridge’s law of headlines: “Any headline that ends in a question mark can be answered by the word no”.

2. They simply pretend that human identifiers don’t exist on iNaturalist (I can’t speak for the other apps, having never used them, but I assume at least one or two are also not exclusively based on AI/computer vision). The way they present the identification process is highly misleading and implies that IDs are only ever based on computer vision.

3. To support their claims of ID apps misidentifying plants, they cite exactly one published paper, McMullin & Allen (2022), which does not deal with plants at all… it’s about lichens. I strongly suspect this is because it was one of the few papers that actually supported what they a priori expected, i.e. poor ID quality, so they shoehorned it in; had they picked papers that actually explored ID accuracy in plant taxa, they would have had much poorer empirical support for their arguments.

The other two studies they cite are about plants, but they’re unpublished masters theses which, at least from my searching, are completely inaccessible/non-existent online (though maybe someone here can find them), so you can’t even look at their methods/results to assess them.

4. They state (from the findings of those two masters theses) that “Notably, the apps falter in identifying rare and/or endemic taxa”, and provide one example: “Ononis varelae Devesa (1986: 84), an endemic Leguminosae taxon from the southwestern Iberian Peninsula, not correctly identified by any of the apps tested.” This species has literally a single observation on iNat (posted by one of the paper’s authors), so of course it is impossible for the computer vision to suggest it! To hold this up as a shining example supporting their argument, and to have the iNaturalist bar for endemic species correctly identified (Figure 2b) sit at 0%, is ludicrous: they cherry-picked species that are literally impossible for the computer vision to suggest because they have so few observations. It’s either intentionally misleading, or they have no idea how the computer vision actually works.

5. “Traditionally, plants have been identified using dichotomous keys. This specialized and time-consuming task demands meticulous examination and expertise due to the intricate nature of botanical terminology and the challenges posed by certain taxonomic traits” [emphasis mine] is such a weird statement. Is this true of some taxa? Sure. Of a lot of taxa? Sure. Are there many plant taxa to which it does not apply whatsoever? Absolutely. It’s an obviously hyperbolic claim.

6. (with respect to increasing use of apps and decreasing use of books) “increasing time spent within our classrooms utilizing tools that feature cryptic scientific language, creating confusion and leading our students to erroneous outcomes, and frequently perpetuating outdated taxonomies.” This is maybe one of their most ludicrous statements. I genuinely struggle to see how they equate apps with ‘cryptic scientific language’. The claim is also self-contradictory, given that earlier in the piece they literally said keys require expertise and time to use due to the “intricate nature of botanical terminology”. And as for apps “frequently perpetuating outdated taxonomies”: do they understand how print books work?

As my last point (because if I spend any longer on this I’ll have an aneurysm), I think the most amusing line in their piece is near the end, where they state “For all these compelling reasons…”


This was in regard to apps? I find this funny because it’s a really good description of what it’s like to use a dichotomous key, at least if you’re unfamiliar with it.

No, I am not anti-dichotomous key; I use them all the time. But seriously, that’s kind of funny.


Bizarre, right? At least that’s how I interpreted that section; it was somewhat ambiguously written.

This was the whole section of text:

Our students live in an online world, wherein these apps (along with many others to come) will stay in our pockets, while books grace the shelves of libraries. The potential detrimental effect is generally caused by the increasing time spent within our classrooms utilizing tools that feature cryptic scientific language, creating confusion and leading our students to erroneous outcomes, and frequently perpetuating outdated taxonomies.

Seems like they’re saying the apps feature the cryptic language, right?


Yes. And yes.

Each month the CV model adds new species and gets better at ‘seeing’ rare and endemic taxa.


Before you can even use the key, you need to learn a whole new vocabulary and take several anatomy lessons… and then it may require certain very specific conditions to be usable… if it’s even up to date. Keys are great tools when created properly, but when there are fewer than 10 options I find it much faster to check against the descriptions manually.


Yeah, this doesn’t make sense at all. In addition to the issue we noted, no one is using iNat, Seek, etc. in the classroom; you have to go out and find stuff to observe. Also, I don’t feel like people really use field guides less; in my observation they remain popular, and some amazing ones have come out lately too.


Bonus points when our field guide authors are active on iNat (which helps both ways when they are also active taxonomists).


These echoed my thoughts/comments on this paper almost exactly.

I will also add that it’s deceptive to present a 0% correct identification rate for a species, as they do, when the species is not included in the model. This makes it seem as though the model had a chance to ID the species, when it didn’t: the species isn’t in the model at all. In fact, no testing was necessary to uncover this result; it was 100% determinable beforehand.
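The point generalizes: a classifier can only ever predict labels it was trained on, so its “accuracy” on a species outside its label set is 0% by construction. A minimal toy sketch in Python (the taxon names here are placeholders, not any app’s real label set):

```python
# Toy illustration: an out-of-vocabulary species can never be correctly
# predicted, so testing it against the model tells you nothing new.
# KNOWN_TAXA stands in for a CV model's label set (placeholder names).
KNOWN_TAXA = {"Quercus robur", "Bellis perennis", "Taraxacum officinale"}

def toy_classifier(true_taxon: str) -> str:
    """Return a 'best guess', which is necessarily one of the known taxa."""
    # A real model scores every known label; an unknown species can at best
    # be misassigned to some known lookalike (here, an arbitrary known taxon).
    return true_taxon if true_taxon in KNOWN_TAXA else next(iter(KNOWN_TAXA))

def accuracy(true_taxon: str, n_trials: int = 100) -> float:
    hits = sum(toy_classifier(true_taxon) == true_taxon for _ in range(n_trials))
    return hits / n_trials

print(accuracy("Quercus robur"))   # in the label set: 1.0
print(accuracy("Ononis varelae"))  # not in the label set: 0.0, guaranteed
```

The 0% bar in their Figure 2b is exactly this second case: determined by the model’s label set before any “test” was run.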

I looked into one of the other apps used in the paper (PlantNet), and, while I couldn’t find a list of species included, I could see that it has never predicted an ID for their focal species, strongly suggesting that it isn’t in the PlantNet model either.

And since we’re comparing CV models and human observers, I think it’s fair to point out that human IDers also have more difficulty IDing rare and endemic species. There are fewer specimens to observe/learn on, many IDers don’t have direct experience with them in the field/haven’t seen them in real life, etc. There are certainly rare endemics that are very distinctive which can be easily IDed visually (and they are prime targets for inclusion in a CV model!), but the solution here seems to me to be to get the pics and include them in the model - this would help protect rare species!

Lastly, I find it hilarious that the authors (given their lack of citations to back up key points) totally ignore examples, for which citations are available, where iNat (and maybe other apps, I’m less sure there) has helped identify previously unknown populations of rare species, or has even resulted in the discovery of new species (which couldn’t be protected as such before discovery).


This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.