Provide Relevant Geographic Data/Confidence Level/Accuracy Scores with AI Suggestions

I’m sure the AI is good at recognizing giraffes and flamingos, but it is quite bad with some groups, like corals. A user recently posted 8 coral photos from Christmas Island… each one was misidentified by the AI. Most were wrong at the family level… some at the order level. This is not an isolated incident. It seems to me that this technology is suited to distinguishing a clam from a shrimp… not to separating Montipora from Echinopora. There are now hundreds of observations for many of these groups, so it seems unlikely that it’s struggling from a lack of usable data.

And then there is the issue of it regularly confusing geographically isolated taxa. How many misidentified Atlantic Ocean corals will it take before it stops suggesting Pacific taxa, and vice versa? These mistakes might not be so readily passed on to users if this basic geographic data were simply included in the output of the AI analysis.

And perhaps some users would be less inclined to mindlessly click on the AI offerings if a confidence score were listed for each suggestion. Of course, this only works if the AI doesn’t suffer from Dunning-Kruger. Likewise, it would be informative to provide data showing the historical accuracy rate of the AI suggestions for the given taxon. If, for instance, Genus A is only correctly identified 10% of the time, maybe users would think twice before accepting it.

An example of what I have in mind for a hypothetical coral observation…

Suggested ID #1: Cynarina (Confidence: 42% | AI Accuracy: 27/210 | Nearest Observation: 512 km)
Suggested ID #2: Homophyllia (Confidence: 40% | AI Accuracy: 33/125 | Nearest Observation: 268 km)
Suggested ID #3: Scolymia (Confidence: 39% | AI Accuracy: 40/151 | Nearest Observation: 15,263 km)
Suggested ID #4: Scleractinia (Confidence: 95% | AI Accuracy: 49,386/51,346 | Nearest Observation: <1 km)

With this extra data, a user can see that both Cynarina and Homophyllia occur in this region and have a roughly equal chance of being correct, which might encourage further investigation on their part. The third suggestion, Scolymia, while morphologically similar (hence the similar AI scores), occurs a world away and immediately stands out as an unlikely choice. This would also encourage a more conservative approach to identifying. ID #4 here is almost certainly correct, given that juicy 95% confidence score and a high accuracy rate. I’m sure some users would rather opt for the sure thing than a lower-confidence guess at genus or species level.
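
To be concrete about what I’m asking for, here is a rough sketch in Python of the kind of record each suggestion would need to carry and how it might be rendered. It is purely illustrative; none of the field names correspond to anything that actually exists in iNat.

```python
# Purely illustrative sketch -- not iNat code. Shows the extra data each
# AI suggestion could carry and how it might be displayed.
from dataclasses import dataclass

@dataclass
class Suggestion:
    taxon: str
    confidence: float      # the model's own score for this taxon, 0-1
    correct_ids: int       # times this suggestion was later confirmed by identifiers
    total_ids: int         # times this suggestion was offered and resolved
    nearest_obs_km: float  # distance to the closest research-grade observation

    def render(self) -> str:
        return (f"{self.taxon} (Confidence: {self.confidence:.0%} | "
                f"AI Accuracy: {self.correct_ids}/{self.total_ids} | "
                f"Nearest Observation: {self.nearest_obs_km:,.0f} km)")

print(Suggestion("Cynarina", 0.42, 27, 210, 512).render())
# Cynarina (Confidence: 42% | AI Accuracy: 27/210 | Nearest Observation: 512 km)
```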

I may be misconstruing you here, and I apologize if I am, but it sounds like you think the AI is constantly learning, which unfortunately isn’t the case. It takes months to train a model, and the current training run was started a few months ago and is not yet complete. We plan on training about two models a year.

For taxa that have a lot of photos, I find that it’s pretty good at distinguishing between some pretty similar-looking species, such as Sceloporus occidentalis and Sceloporus graciosus. Not foolproof, of course, but I’m pretty surprised at times. So much depends on having lots and lots of correctly identified photos, which I would imagine is less likely for corals.

8 Likes

The issue generally seems to be how often the AI is trained and how many research grade observations there are of any given set of species to train the AI with.

I don’t know how many observations (and extra observations) are needed to train the AI, but I imagine it’s quite a few; a few hundred may not be enough for a good ID, especially if there are similar-looking species. In addition, as I understand this sort of machine learning, it’s not just about how many IDs of a particular species there are; the model also needs comparable numbers of the similar species in order to learn how to distinguish between them and provide more than a single ID suggestion.

I do wish the geographical aspect were better (here in SE Asia we constantly get US species suggested for our insects), but given the prevalence of invasive and introduced species globally, I’m not sure that I’d actually advocate for a geographic component to be included as the default in AI suggestions.

Based on the suggestions I get, it already takes into account what’s been verified nearby, but what counts as “nearby” I don’t know, nor whether that “nearby” varies by species (I suspect it doesn’t).

3 Likes

“Seen Nearby” means that there’s an RG observation within 100 km and within +/- 45 calendar days of the observed-on date. (https://forum.inaturalist.org/t/range-covered-by-the-seen-nearby-feature/2849/5?u=tiwane)
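
In rough Python, the rule amounts to something like this. This is only a sketch of the logic as described above, not the actual implementation:

```python
# Sketch of the "Seen Nearby" rule: a research-grade observation of the taxon
# within 100 km and within +/- 45 calendar days (by day of year) of the new
# observation's observed-on date. Illustrative only.
from math import radians, sin, cos, asin, sqrt
from datetime import date

def km_between(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def within_45_days(d1: date, d2: date) -> bool:
    """Compare day of year regardless of year, wrapping around New Year."""
    diff = abs(d1.timetuple().tm_yday - d2.timetuple().tm_yday)
    return min(diff, 365 - diff) <= 45

def seen_nearby(obs, rg_observations) -> bool:
    """obs and each rg observation are assumed to have lat, lon, observed_on."""
    return any(
        km_between(obs.lat, obs.lon, rg.lat, rg.lon) <= 100
        and within_45_days(obs.observed_on, rg.observed_on)
        for rg in rg_observations
    )
```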

6 Likes

While I don’t think it’s wise to add a numeric “confidence score”, I would like to see relevant geographic and temporal data: “closest observation of this species X km away, Y weeks earlier/later in the year”.
Not necessarily by default; it could be a setting or a scroll-over “advanced info” button, too.

5 Likes

Most of this is discussed here https://forum.inaturalist.org/t/computer-vision-should-tell-us-how-sure-it-is-of-its-suggestions/1230
and here
https://forum.inaturalist.org/t/better-use-of-location-in-computer-vision-suggestions/915

“Seen X miles or kilometers away” sounds interesting; I wonder how intensive that would be to calculate on the fly.
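
Probably not very intensive if the observations sit in a spatial index. Here is a toy sketch using scipy’s cKDTree, just to illustrate the idea; iNat would presumably query its own geo-indexed database rather than do anything like this in memory:

```python
# Toy sketch of a fast "nearest observation" lookup -- illustrative only.
import numpy as np
from scipy.spatial import cKDTree

def to_xyz(lat, lon):
    """Lat/lon in degrees -> 3D unit-sphere coordinates."""
    lat, lon = np.radians(lat), np.radians(lon)
    return np.column_stack((np.cos(lat) * np.cos(lon),
                            np.cos(lat) * np.sin(lon),
                            np.sin(lat)))

# Build one tree per taxon from its research-grade observations (done offline).
obs_latlon = np.array([[10.4, -85.7], [9.9, -84.1], [-21.1, 55.5]])  # fake data
tree = cKDTree(to_xyz(obs_latlon[:, 0], obs_latlon[:, 1]))

# Query time: a single nearest-neighbour lookup, roughly O(log n).
chord, idx = tree.query(to_xyz(np.array([9.7]), np.array([-83.8])))
distance_km = 2 * 6371 * np.arcsin(chord / 2)  # chord length -> great-circle km
print(f"nearest observation: {float(distance_km[0]):.0f} km away")
```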

7 Likes

That thread also included discussion of the 45-day window not being helpful for those of us who look at plants.

2 Likes

Just wondering: for a species that has hundreds of observations from the Atlantic Ocean and a single misidentified observation from the Pacific, does the AI interpret that as meaning the species occurs in both regions, or does it take into account the relative frequency of observations from the two regions? And when calculating what qualifies as nearby, is it just geographic distance, or does it take into account the biogeographic reality of our planet? For example, species on opposite sides of the Isthmus of Panama are still quite close to each other, but there is minimal overlapping fauna.
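
To make the first question concrete, this is the kind of relative-frequency check I have in mind. It is purely hypothetical, not something iNat does, and the thresholds are made up:

```python
# Hypothetical sketch only -- not how iNat works. Should one stray Pacific
# record out of hundreds of Atlantic ones really count as "occurs in the Pacific"?
def likely_present(region_obs_count: int, total_obs_count: int,
                   min_fraction: float = 0.02, min_count: int = 3) -> bool:
    """Treat a taxon as present in a region only if the region holds at least
    a small fraction of its observations and more than a couple of records."""
    if total_obs_count == 0:
        return False
    return (region_obs_count >= min_count
            and region_obs_count / total_obs_count >= min_fraction)

print(likely_present(region_obs_count=1, total_obs_count=301))    # False
print(likely_present(region_obs_count=300, total_obs_count=301))  # True
```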

1 Like

It only takes a single misidentified research grade observation for it to add the “Seen Nearby” label to a suggestion, with geographic distance defined as in the post above.

4 Likes

Location is not taken into account when training the AI; it’s trained purely on iNat photos. Here’s the threshold for which taxa are included in a training run: https://www.inaturalist.org/pages/help#cv-taxa

When you submit an observation for suggestions, the model determines which taxa your photo is most visually similar to; that’s it. The app/website takes “seen nearby” data into account when ordering the displayed results, but that has nothing to do with computer vision or AI. Much more info about the model can be found here: https://www.inaturalist.org/blog/25510-vision-model-updates
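
Conceptually, the display step is just a re-sort of the vision results. A simplified sketch, not the actual app/website code:

```python
# Simplified sketch of the display step. The vision model only returns
# (taxon, visual_score); "seen nearby" is looked up separately and used
# to reorder the suggestions before they are shown.
def order_suggestions(vision_results, seen_nearby_taxa):
    """vision_results: list of (taxon, visual_score); seen_nearby_taxa: set of taxa."""
    return sorted(
        vision_results,
        key=lambda r: (r[0] not in seen_nearby_taxa, -r[1]),  # nearby first, then score
    )

results = [("Scolymia", 0.39), ("Cynarina", 0.42), ("Homophyllia", 0.40)]
print(order_suggestions(results, seen_nearby_taxa={"Cynarina", "Homophyllia"}))
# [('Cynarina', 0.42), ('Homophyllia', 0.40), ('Scolymia', 0.39)]
```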

3 Likes

Could the Seen Nearby results be listed first? They seem to be scattered “at random” across the list.

Maybe you should add or alter a few pixels in every photo to encode latitude, longitude, and day-of-year. The AI might start giving those few pixels a lot of weight in its visual similarity rankings. :wink:

4 Likes

I’m going to close this, as the scores the model uses are not particularly useful without context and they can be misleading. Only showing “seen nearby” results should reduce the number of misIDs.

1 Like