Identification Quality On iNaturalist

Have you looked at the camera trap AI literature? Camera traps generate huge volumes of data, and AI is used to make the initial IDs, which are then verified by people.
It is over two years since I last looked at this, but two results stick in my mind.
* AI is far better at making IDs when there is an object present. When no mammal is present (most camera trap arrays seem geared to larger mammals, so an image counts as “nothing” even when it is trivial to ID four or more plant species in it), the AI will still find a mammal, but with low certainty.
* A model trained on one camera trap array, with accuracies of over 95%, dropped to below 80% when applied to another camera trap array a few hundred km away, for the same set of mammal species. (That result still has me floored.)
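
For what it is worth, that cross-site drop is easy to reproduce in miniature, presumably because the model learns site-specific backgrounds along with the animals. Below is a purely synthetic sketch in Python (the feature values, shift sizes and scikit-learn setup are all my own assumptions, not taken from any camera trap study) that mimics the effect: train on “site A”, evaluate on “site B” with different background statistics.

```python
# Toy illustration of cross-site (domain shift) accuracy loss.
# Entirely synthetic; not based on any real camera trap dataset.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_site(n, background_shift):
    """Simulate 2-feature 'images' of two mammal species plus a site-specific background offset."""
    labels = rng.integers(0, 2, n)
    species_signal = np.where(labels[:, None] == 1, [1.5, 0.0], [0.0, 1.5])
    background = background_shift + rng.normal(0, 0.6, (n, 2))
    return species_signal + background, labels

X_a, y_a = make_site(2000, background_shift=np.array([0.0, 0.0]))   # training array
X_b, y_b = make_site(2000, background_shift=np.array([2.0, -1.0]))  # array a few hundred km away

clf = LogisticRegression().fit(X_a, y_a)
print("accuracy on site A:", clf.score(X_a, y_a))
print("accuracy on site B:", clf.score(X_b, y_b))  # typically far lower: the boundary fitted to
                                                   # site A's background no longer separates species
```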

Just one point about the above results. Your analysis is based on an area with very good alpha taxonomy and a superb resource base of field guides (I assume - I cannot find the geographical coverage for the groups in the links you provided). I don't think it will apply to areas with many poorly known species, areas with field guides that cover only a small proportion of the species present, or areas with few field researchers. In those areas - just as with the AI - the locals will be misidentifying species to the closest match in their field guides. Fortunately, field guides tend to focus on the commoner and more widespread species, so accuracy for these commonly encountered species will be relatively high (fewer false IDs). For the rarer species not covered in the local field guides, however, the proportion of false identifications may be very high, with most (of the few available) observations incorrectly identified as their commoner counterparts.

This will then perpetuate when these data are used to train the AIs - a double whammy: rare species won't have enough observations to train the AI, and some of their observations will be incorrectly labelled as more common species, so the AI will be “trained” to mis-ID them.
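
To make the double whammy concrete, here is a purely synthetic sketch (no real iNat or field-guide data; the species counts, feature values and mislabelling rates are invented) showing how recall on a rare species typically falls as its training observations get scarcer and a share of them are mislabelled as the commoner lookalike.

```python
# Toy illustration of the "double whammy": few rare-species observations,
# some of them mislabelled as the common lookalike before training.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def simulate(n_rare, mislabel_rate):
    # Common species: 2000 training images; rare lookalike: n_rare images with offset features.
    X_common = rng.normal([0.0, 0.0], 0.8, (2000, 2))
    X_rare = rng.normal([2.0, 2.0], 0.8, (n_rare, 2))
    y = np.concatenate([np.zeros(2000), np.ones(n_rare)])
    # "Field-guide effect": a share of the rare species' labels are recorded as the common species.
    flip = rng.random(n_rare) < mislabel_rate
    y[2000:][flip] = 0
    clf = LogisticRegression().fit(np.vstack([X_common, X_rare]), y)
    # Recall measured on a clean, correctly labelled test set of the rare species.
    X_test = rng.normal([2.0, 2.0], 0.8, (500, 2))
    return (clf.predict(X_test) == 1).mean()

for n_rare, rate in [(200, 0.0), (50, 0.0), (50, 0.4)]:
    print(f"rare training images: {n_rare:3d}, mislabel rate: {rate:.0%}, "
          f"rare-species recall: {simulate(n_rare, rate):.2f}")
```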

I understand the assumption of “blind” identification by experts. But why? Do you really believe that your experts will fall for “false leads” often enough to be less accurate? In southern African Proteaceae, I know many of the pitfalls and problem groups, and a pre-existing ID tends to invoke a “better check this” reflex.

Given that there are very many experts active on iNat, surely you can do a far more comprehensive analysis based on existing identifications? I would volunteer my identifications (https://www.inaturalist.org/observations?place_id=113055&taxon_id=64517&verifiable=any), but for the fact that in southern Africa we have a culture of agreeing with “our” experts (many of whom are European!), and so their IDs (or mine for Proteaceae) will tend to become community IDs by reputational agreement (i.e. based not on data, but on reputation). Still, there would be the statistics of how often other users changed their IDs following an expert ID (and vice versa: how often experts changed their ID based on other users' input), in addition to other statistics involving the expert (leading, confirming, maverick); a rough sketch of such an analysis is given below.

The blind, independent assessment is not the only way of assessing accuracy on iNat. Yes, it is true that the (local) taxa with active experts will be a very biased sample: those groups are comprehensively curated, while for groups without local experts it is impossible to measure accuracy to any meaningful extent. But that is also true for any museum or herbarium on earth.
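
For illustration, something like the following could be run over an export of identification records. The file name, column names and the expert login below are placeholders I have made up, not a real iNat export format or API call.

```python
# Sketch of a non-blind accuracy analysis: for observations where an expert added an ID,
# how often did other users who had already identified the observation change their ID afterwards,
# and to the expert's taxon? Input format is assumed: one row per identification.
import pandas as pd

EXPERT_USER = "tonyrebelo"  # placeholder login for the expert of interest

# Assumed columns: observation_id, user_login, taxon_id, created_at.
ids = pd.read_csv("identifications.csv", parse_dates=["created_at"])
ids = ids.sort_values(["observation_id", "created_at"])

had_both = 0            # users with IDs both before and after the expert weighed in
switched_to_expert = 0  # of those, users whose later ID moved to the expert's taxon

for obs_id, group in ids.groupby("observation_id"):
    expert_rows = group[group.user_login == EXPERT_USER]
    if expert_rows.empty:
        continue
    expert_time = expert_rows.created_at.min()
    expert_taxon = expert_rows.taxon_id.iloc[-1]
    others = group[group.user_login != EXPERT_USER]
    for user, u in others.groupby("user_login"):
        before = u[u.created_at < expert_time]
        after = u[u.created_at >= expert_time]
        if before.empty or after.empty:
            continue
        had_both += 1
        if before.taxon_id.iloc[-1] != expert_taxon and after.taxon_id.iloc[-1] == expert_taxon:
            switched_to_expert += 1

print(f"users with IDs both before and after the expert: {had_both}")
print(f"of those, users who switched to the expert's taxon: {switched_to_expert}")
```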
