Computer Suggestions: use disagreements as a measure for difficult taxa

Short version: Make the AI learn from its own mistakes by building into the algorithm a way to count disagreeing identifications and thus detect problematic taxa (i.e. those with a high number of erroneous AI suggestions).

Long read:
Computer Vision works great for some taxa and areas and less so for other organisms and regions.

While I am strongly in favor of improving the relevance of locality, probably resulting in higher accuracy and/or more conservative suggestions (genus level and higher) (discussed in this topic), here I want to focus on another important aspect of AI errors:
Species that can rarely be IDed to species level from photos.

For many arthropod groups, there is that one species that the AI loves to suggest, despite the fact that

        a) the specimen clearly belongs to a different genus or even family...
 and/or b) it does not occur in that region at all
     or c) it might be that species, but is impossible to ID exactly from photos

This could lead to a ‘vicious circle’: false IDs lead to more suggestions of that species, which lead to more false IDs by others.

Suggestions have been made to take curatorial steps and ‘block’ some species from the AI pool if experts state that a correct ID is too difficult or unlikely to be made in the field. I am not in favor of this but rather want to ask:
Why not use the power of crowdsourcing to find those tricky taxa and implement this in the learning algorithm?

How could this be done? By looking at the ratio of disagreements for a certain taxon.

Example: Flesh flies (Sarcophagidae) are often IDed as Sarcophaga carnaria, although a species ID is almost impossible from in vivo photos. If those IDs are made using the AI suggestions and subsequently get corrected by other users to genus level, then the algorithm could learn to avoid suggesting a species in these cases and become more conservative. One just has to define a threshold for the disagreement ratio of a certain taxon.
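The idea above could be sketched roughly as follows. This is a minimal illustration, not how iNaturalist actually works; the threshold value, field names, and functions are all invented for the example.

```python
# Hypothetical sketch: demote a CV suggestion to a higher rank when the
# community has frequently disagreed with species-level IDs of that taxon.
# The 0.3 threshold and all field names are assumptions for illustration.

DISAGREEMENT_THRESHOLD = 0.3  # fraction of corrected IDs that triggers demotion

def disagreement_ratio(agreeing: int, disagreeing: int) -> float:
    """Fraction of identifications of a taxon that were later disagreed with."""
    total = agreeing + disagreeing
    return disagreeing / total if total else 0.0

def suggested_rank(taxon: dict) -> str:
    """Return 'species' normally, but fall back to 'genus' for tricky taxa."""
    ratio = disagreement_ratio(taxon["agreeing_ids"], taxon["disagreeing_ids"])
    return "genus" if ratio >= DISAGREEMENT_THRESHOLD else "species"

# Example: a taxon like Sarcophaga carnaria with many community corrections
tricky = {"agreeing_ids": 40, "disagreeing_ids": 60}
print(suggested_rank(tricky))  # genus
```

A taxon with few corrections (say 95 agreeing vs. 5 disagreeing IDs) would stay below the threshold and still be suggested at species level.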

I would be interested to learn if this would be feasible to implement into the system, and also if there would be any downsides of such a system.

One could imagine that this might negatively affect the AI accuracy for species that are difficult to ID only in parts of their range, as in other areas there would be no similar species around and the ID would thus be straightforward. However, improving a correct genus to species level is easier than correcting an incorrect species choice.

And ideally, to go one step further, location and the ‘disagreement ratio’ could be combined: for example, in Western Europe the AI might suggest Erinaceus europaeus for a hedgehog, but in Russia, where several species occur, it would prioritize the genus (a similar case is American Crow on the Pacific coast vs. possible confusion with Fish Crow in the east).
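Combining the two signals might look something like this. Again a hypothetical sketch: the per-region ratios, the threshold, and the lookup table are made up to illustrate the idea of tracking disagreement per (taxon, region) pair rather than globally.

```python
# Hypothetical sketch combining region with the disagreement ratio:
# keep a ratio per (taxon, region) pair so a species can still be
# suggested where it is unambiguous. All data here is invented.

THRESHOLD = 0.3

# per-region disagreement ratios, e.g. derived from past community corrections
region_ratios = {
    ("Erinaceus europaeus", "Western Europe"): 0.05,  # no lookalikes locally
    ("Erinaceus europaeus", "Russia"): 0.55,          # congeners overlap here
}

def regional_suggestion(species: str, genus: str, region: str) -> str:
    """Suggest the species where corrections are rare, else fall back to genus."""
    ratio = region_ratios.get((species, region), 0.0)
    return genus if ratio >= THRESHOLD else species

print(regional_suggestion("Erinaceus europaeus", "Erinaceus", "Western Europe"))
print(regional_suggestion("Erinaceus europaeus", "Erinaceus", "Russia"))
```

The same mechanism would cover the crow example: a low ratio on the Pacific coast keeps the species suggestion, while a high ratio in the Fish Crow overlap zone demotes it to genus.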

Let me hear what you think.

Not sure if the system already does this, but it does show easily confused taxa, presumably because someone erroneously selected a CV suggestion. Only works for genus or lower though?

2 Likes

Definitely worth exploring, although perhaps at first on the filtering results end of things, which is much easier to play around with than model training.

6 Likes

Follow-up suggestion:
After thinking about it, maybe disagreements following computer vision selections should even be weighted higher (similar to how RG observations can be overruled).

Don’t know how this could be implemented in the learning algorithm or the filtering, or what the exact factor should be, but I feel this would really improve the quality of suggestions for those taxa which are especially error-prone.

In particular, for those commonly observed taxa where a species ID is rarely or never possible (and thus alternative species suggestions make no sense), this could prompt the AI to reason as follows: 'It looks most similar to this species, which I would normally suggest; but as these observations have so often been corrected, I will only suggest higher taxa.'
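The weighting idea could be sketched like this. A hypothetical illustration only: the 2.0 factor, the record format, and the function are assumptions, not anything from iNaturalist's actual training or filtering code.

```python
# Hypothetical sketch: weight a disagreement more heavily when the
# corrected ID originally came from a CV suggestion. The 2.0 factor
# and the record fields are assumptions for illustration.

CV_WEIGHT = 2.0   # extra weight for corrections of CV-based IDs
THRESHOLD = 0.3

def weighted_ratio(records):
    """records: iterable of (was_cv_suggestion, was_disagreed) pairs."""
    weight_total = weight_disagreed = 0.0
    for was_cv, disagreed in records:
        w = CV_WEIGHT if was_cv else 1.0
        weight_total += w
        if disagreed:
            weight_disagreed += w
    return weight_disagreed / weight_total if weight_total else 0.0

# Four IDs: two CV-based and later corrected, two manual and upheld.
records = [(True, True), (True, True), (False, False), (False, False)]
print(weighted_ratio(records))  # 4/6, i.e. ~0.667, above the threshold
```

With equal weights the same records would give a ratio of 0.5; the extra weight on CV-driven corrections pushes error-prone taxa over the threshold sooner.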

3 Likes

If anything like this is done, a complicating factor that has to be addressed is how to deal with species that get added to the training pool as observation numbers rise.

The pre-addition records may have accumulated corrections because the correct option was unavailable to select, but once the species is included you don’t want it deprioritized.

1 Like

I agree that a feature aimed at this issue would benefit the community and rein in users who quickly jump to species identification. In my early days on iNaturalist, I had to be corrected by others in the community on my tendency to go to species level for arachnids that can only be properly distinguished by their epigyne. Building such a feature into the suggestion algorithm could help newer users not rush such IDs based on the algorithm.

2 Likes

Every day, new observations of the Common Flesh Fly, Sarcophaga carnaria, are submitted due to the CV suggestions.
So there must have been many observations included in past training, because after a thorough curation process the number of species-level observations is now down to 5, and other members of this genus have at most 19 observations.

But these learning cycles are still a black box for me: will the CV ‘forget’ those species in the next cycle, as their observation counts now fall below the inclusion threshold? And instead suggest only genus or higher? Or will it continue to offer those impossible-to-ID suggestions?

2 Likes

@tiwane @alex this illustrates why it would be good to know when the next training set will be created for the computer vision, so that identifiers can have something to aim for when sorting out misidentifications in difficult groups. If we knew the training set would be pulled on 1 June, for example, or even between 1 and 10 June, we could take that into account when going through such groups.

4 Likes