Did computer vision get worse lately?

I’ve noticed lately that the species suggestions when you are uploading on the computer are much worse than they were in the past. It brings back fewer choices, and most of the choices aren’t even close. I am speaking mostly about moths, which it has been pretty good at getting to genus level in the past.

For example:

  1. I have the very common Orthosia hibisci and it suggests Noctua pronuba, which is not even remotely close.
  2. I post a Lithophane sp. and it just returns the extremely broad “Noctuoidea”, Catocala, Balsa, and random noctuid moths. Lithophane used to be easily suggested as the genus for moths of this shape.
  3. For Psaphida rolandi it returns the genus and species, but for Psaphida styracis it suggests only ONE species (Cerastis tenebrifera).

I don’t remember the AI only returning 1-3 suggestions, or the suggestions being this far off. Did they recently change anything? My photographs show enough detail that the AI suggestions shouldn’t have changed much since last season. The AI suggestions made submitting moths easy, but now I have to manually type in half the names because it is so far off!

1 Like

A change was made so that, by default, only species reported nearby are initially shown. There should be an option at the bottom of the list to turn that off. What happens if you do that?

5 Likes

Moths are (mostly) so variable that the AI is almost worthless for them. I only use it for an extremely rough ID - like family or subfamily. For example, there are many moths that resemble Lithophane spp. in general shape. Use it to narrow down a search, but the details must be confirmed. Manually typing in half a name is not a hardship; if the species is not very common, I often need to type in half the species name anyway.
EDIT: It may seem worse because more moth species and variants are being added daily, giving the computer more choices to process. Noctua pronuba is a highly variable moth, so the more that are added, the more there is to decide among.

6 Likes

(made a slight edit to the subject as “computer vision” is a more accurate term than “AI”)

As others have noted, computer vision on the web and in Android (and soon iOS) by default now only shows visually similar results that have been “seen nearby”. You can read more here: https://forum.inaturalist.org/t/better-use-of-location-in-computer-vision-suggestions/915/47
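A minimal sketch of what that display-side filtering might look like (all names here are hypothetical, not iNaturalist’s actual code): the model still returns the same visual-similarity scores, and the filter only decides which of them get shown, with a toggle to show everything.

```python
# Hypothetical sketch of the display-side "seen nearby" filter described
# above. The vision model's scores are untouched; only the list shown to
# the user is filtered. All names here are illustrative.

def filter_suggestions(cv_results, nearby_taxon_ids, include_all=False):
    """cv_results: list of (taxon_id, visual_score) pairs from the model.
    nearby_taxon_ids: taxa with observations near the photo's location.
    include_all: the "include suggestions not seen nearby" toggle."""
    if include_all:
        return cv_results
    return [(t, s) for t, s in cv_results if t in nearby_taxon_ids]

# Eight model suggestions can shrink to one or two after filtering,
# which would explain the much shorter lists described above.
results = [(101, 0.61), (202, 0.20), (303, 0.08), (404, 0.05)]
nearby = {202}
print(filter_suggestions(results, nearby))        # [(202, 0.2)]
print(filter_suggestions(results, nearby, True))  # all four pairs
```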

You didn’t specify a photo, but if I go to https://www.inaturalist.org/observations/72107514 it’s the top suggestion:

6 Likes

Sure, but that’s because an ID has already been suggested within that genus. (At least as far as I remember, that tips the balance a little, so the suggestions will be different before and after an ID is added.)

As for the original question, I think the Seen Nearby-only default was the only recent CV change. Personally I’ve found that it’s gotten better, but I use it for birds, not moths - it used to be terrible at recognizing Dark-eyed Juncos, but now it nearly always has them in the top suggestions. I would recommend just not using it for moths, and identifying to a higher level or typing in the species name yourself if you know it.

4 Likes

That’s interesting - it seems to me that the AI is getting pretty good with moths, at least California moth species, based on comparing its suggestions to expert IDs later on. It is still pretty bad for some other insect groups, but it does seem to be getting better overall.

1 Like

That’s not the case, at least on the observation detail page when clicking in the Suggest an ID field. If you use “Visually Similar” in the Compare modal, yes, it’s restricted by taxon as well, but not here.

To be clear, the actual model itself, which only measures visual similarity, hasn’t changed since March 2020. What’s changed is how iNaturalist displays the suggested taxa.

8 Likes

That’s good to know. I hadn’t uploaded for a little while either, so I wasn’t sure what had changed and I was wondering the same thing.

As a related question: For the “seen nearby” limit, it’s also limiting suggestions based on when the species was last observed, right? If so, how big is the time window?

1 Like

I think the changes are good, especially for new users, who often just clicked on the first suggestion without actually knowing the organism.

Interestingly, though, I just uploaded a mushroom, and on the upload screen the CV suggested a possum, but now when I go into the obs it suggests the correct (distinctive) species.
https://inaturalist.ala.org.au/observations/72120105

1 Like

I also think that the change has been a big improvement. By default it now only lists species seen nearby, instead of always listing 8 possibilities even when some of those were clearly not possible. I have found that on the relatively rare occasion when I get a nice clear photo of a distinctive species that hasn’t been seen nearby, the top suggestion (“We’re pretty sure that this is… XXX”) will still show a species not seen nearby. Then you can toggle on “species not seen nearby” and get a good recommendation that way.

1 Like

I believe it’s a 3-month window

2 Likes

But unless it has been changed, the window applies across every year, to account for seasonality; the clock does not restart each year. So if the window is March 1 to April 30, it is those days in every calendar year.

Note that the date range there is just an example. I too think it is ±45 days, but I can’t remember for sure.
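If that’s right, the check ignores the year entirely and compares only calendar dates, wrapping around New Year. A rough sketch of that logic, assuming the unconfirmed ±45-day figure (which works out to roughly the 3-month window mentioned above):

```python
from datetime import date

# Sketch of a year-agnostic "seen nearby" time window, per the guess
# above: a species counts as seen nearby if any past observation falls
# within WINDOW_DAYS of the photo's calendar date, in ANY year.
WINDOW_DAYS = 45  # unconfirmed guess from this thread

def within_seasonal_window(obs: date, photo: date, window=WINDOW_DAYS) -> bool:
    # Compare day-of-year distance, wrapping around Dec 31 / Jan 1.
    delta = abs(obs.timetuple().tm_yday - photo.timetuple().tm_yday)
    return min(delta, 365 - delta) <= window

# An observation from March 2015 still "counts" for a March 2021 photo:
print(within_seasonal_window(date(2015, 3, 10), date(2021, 3, 25)))  # True
# But a July observation does not:
print(within_seasonal_window(date(2020, 7, 4), date(2021, 3, 25)))   # False
```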

6 Likes

For me, yes, it did. At least, the changes in it seemed to be counter-productive for me.

Can you please provide some specific examples?

It’s worse and gives worldwide species if I turn the option off.

As another user has pointed out, it seems to change based on the user. I got Noctua pronuba, but the other person got Orthosia suggested.

This is a good example. On the upload screen for batch submission it gave VERY limited suggestions (Noctua pronuba), but after I submit and go to the observation like you did (your screenshot), it shows the much better AI suggestions.

Is there something about the image upload screen that limits the computer IDs? After checking my recent observations with the issue, it seems all of them work much better now than they did when I was prepping them for upload.

I was just uploading some stuff, so this isn’t the best example, but it might be a good starting point for explaining the problem.

To be fair, this is obviously an unusual observation: my intended observation was the wasp galls, which are out of their element. But the AI picked some curious options…snails, rabbits, or several types of psychedelic mushrooms. Maybe the AI is thinking of opening a very experimental French psychedelic fusion restaurant?

Actually, let me clarify in case that sounded harsh or sarcastic: with lots of normal observations, it is still as good as it ever was. With common flowers from normal angles, it guesses them. But when it is something like this, or perhaps a bird in flight, it throws up a number of guesses seemingly without connection.

It’s a photo without an obvious subject. Many other observations also have objects hiding somewhere, so the system doesn’t see the difference and thinks it’s just another rabbit behind grass, or another misidentified mushroom somewhere in the grass (and it does look like mushrooms in this preview); they simply look similar to the program. In my experience it has always worked this way with photos like this.

5 Likes

Could well be rabbit droppings!

2 Likes

Can you please send the original photo to help@inaturalist.org? I’d be curious to test it out. Please make sure its metadata is intact.

Those seem like logical suggestions to me. That totally looks like a photo of rabbit droppings, mushrooms, or snails. Cropping can help a lot; I suspect you might get very different results if you cropped the photo to the subject - most gall photos on iNat are pretty tight shots, in my experience. Remember that iNat computer vision isn’t trained to identify taxa, it’s trained to identify iNaturalist photos of taxa. It processes your photo and spits out results that say “this looks like other iNaturalist photos of [X taxon]”.
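For example, a quick pre-upload crop with Pillow (the file name and pixel coordinates below are made up for illustration):

```python
from PIL import Image

# Crop the photo to the intended subject before uploading, so the CV
# sees something closer to the tight gall shots it was trained on.
img = Image.open("galls_wide.jpg")
box = (850, 420, 1450, 980)  # (left, upper, right, lower) around the galls
img.crop(box).save("galls_cropped.jpg")
```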

5 Likes