Are the AI suggestions the same for everyone?

This question came to my mind after I had several times corrected obviously wrong identifications in observers’ bee observations. As the suggested species were not common ones (i.e., not the honeybee or other species that many non-specialists might be aware of), I supposed the IDs were a result of AI suggestions. However, when I looked at these suggestions myself before typing my ID, I didn’t see the species proposed by the observer (and frequently the AI identified the species correctly!). So I’m wondering where these wrong identifications could come from. Could the AI have suggested different species to me and to them? I identified on the desktop version, if that matters.


There is a little sparkling shield icon on IDs made with the computer vision; you can use that to check whether the CV was used.


Just leading off with a quick note that “computer vision” or “CV” are the preferred terms, as the model that iNat uses really isn’t artificial intelligence.

Seek users have access to a different CV model, which doesn’t have as many species and doesn’t perform quite as well (since it needs to run offline). If some of these observations are coming from Seek, that could be the issue (I often encounter this).


Also, iNat’s computer vision model gets minor updates roughly every month and is retrained comprehensively about once a year. So if you’re identifying older observations it could be that the CV suggestions were provided by an earlier version of the CV model.

Also, CV takes geographic proximity into account. If an observation initially had an incorrect location, that might cause incorrect CV suggestions.


I think CV suggestions may change depending on the IDs on the observation. E.g. the same picture left unknown vs. ID’d as Plantae vs. Animalia may produce different suggestions?


I think I saw this in recent observations, and a wrong locality is rather uncommon, so it might have been identifications made with Seek, as suggested above.


This is for sure the case. On the observation page, it filters to suggestions within (I think) just the same iconic taxon, so an unknown observation will get different suggestions than an observation that already has an iconic taxon applied. In the Identify modal, by default it filters to suggestions with the same parent taxon, but you can manually edit the filter to any level you want.


Maybe I don’t get what you mean, but I identify bees, and both the observer’s ID and iNat’s suggestions are bees. (Edit: now I see that this is a reply to @annakatrinrose. In that case I don’t deny the mechanism, but it’s rather not the case in the situations I encounter.)

Back to the original question: about a year ago, I had a case where I was using the iNat Android app to submit an observation, and when I asked for suggestions at the time, the app gave me a wrong top suggestion. After submitting the observation, if I went back in (still on Android) and asked for suggestions, it then gave me the correct top suggestion. Last year this was reproducible with the same photo, but not anymore. When using the website, the CV always gave the correct suggestion.

So to answer your original question literally, I believe the answer was “No” as of a year ago.


True.

The computer vision is certainly AI. Artificial intelligence is a very old term, whose use has shifted as tasks that were once challenging have become solved. But computer vision is still an AI task.


“Computer vision is a field of artificial intelligence (AI) that enables computers and systems to derive meaningful information from digital images, videos and other visual inputs — and take actions or make recommendations based on that information. If AI enables computers to think, computer vision enables them to see, observe and understand.”
https://www.ibm.com/topics/computer-vision

Although I question the “see, observe, and understand” phrase, as computers don’t really do those things… literally.

Do they not? I think the underlying principle that enables convolutional neural networks like Xception to so outperform older attempts at computer vision for image classification is basically that you really do teach the network to see first, and then teach it your specific classification problem, much as a human has to learn to see before they can begin classifying objects by sight (humans do have lots of pretty good boot code for this purpose from evolution). I guess you would call the classification process ‘observing’. The main part of ‘see, observe, and understand’ it doesn’t do yet is just ‘understand’.
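To make that two-stage idea concrete, here is a toy sketch of transfer learning with frozen features. Everything in it is a stand-in: a random frozen projection plays the role of a real pretrained backbone such as Xception, and a simple least-squares linear classifier plays the role of the newly trained classification head. It illustrates only the principle (“learn to see” is reused, only the final classifier is fit to the new labels), not anything about iNat’s actual pipeline, which is not public.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stage 1: a frozen "pretrained" feature extractor. In a real CNN
# (e.g. Xception) these weights come from large-scale training and
# are left untouched; here a random projection stands in for them.
W_frozen = rng.normal(size=(64, 64)) / np.sqrt(64)

def extract_features(x):
    # The network has already "learned to see": inputs are mapped to
    # ReLU features without any further weight updates.
    return np.maximum(x @ W_frozen, 0.0)

# Toy "images": 200 samples with a simple two-class rule.
X = rng.normal(size=(200, 64))
y = X[:, 0] + X[:, 1] > 0

# Stage 2: train only a new classification head on the frozen
# features (least squares against +/-1 labels, for brevity).
F = extract_features(X)
w, *_ = np.linalg.lstsq(F, y.astype(float) * 2 - 1, rcond=None)

acc = ((F @ w > 0) == y).mean()
print(f"head-only accuracy on training data: {acc:.2f}")
```

The point of the sketch is that only `w` is ever fit to the task at hand; the “seeing” stage is reused as-is, which is why a backbone trained on one dataset can be repurposed for a different classification problem.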
