Did computer vision get worse lately?

It’s worse, and it gives worldwide species if I turn the option off.

As another user has pointed out, it seems to change based on the user: I got Noctua pronuba suggested, but the other person got Orthosia.

This is a good example. On the upload screen for batch submission it gave VERY limited suggestions (only Noctua pronuba), but after I submit and go to the observation like you did (your screenshot), it shows the much better AI suggestions.

Is there something about the image upload screen that limits the computer vision suggestions? After checking my recent observations with the issue, it seems all of them work much better now than they did when I was prepping them for upload.

I was just uploading some stuff, and so this isn’t the best example, but it might be a good starting point to explain the problem.

To be fair, this is obviously an unusual observation: my intended observation was the wasp galls, which are out of their element. But the AI picked some curious options…snails, rabbits, or several types of psychedelic mushrooms. Maybe the AI is thinking of opening a very experimental French psychedelic fusion restaurant?

Actually, let me clarify in case that sounded harsh or sarcastic: with lots of normal observations, it is still as good as it ever was. With common flowers from normal angles, it guesses them correctly. But when it is something like this, or perhaps a bird in flight, it throws up a number of guesses with seemingly no connection.

It’s a photo without an obvious focal point. Many other observations also have the subject hiding somewhere in the frame; the system doesn’t see the difference and thinks it’s just another rabbit behind grass, or another misidentified mushroom somewhere in the grass (and it does look like mushrooms in this preview). They simply look similar to the program. In my experience it has always worked this way with photos like this.

5 Likes

Could well be rabbit droppings!

2 Likes

Can you please send the original photo to help@inaturalist.org? I’d be curious to test it out. Please make sure its metadata is intact.

Those seem like logical suggestions to me. That totally looks like a photo of rabbit droppings, mushrooms, or snails. Cropping can help a lot; I suspect you might get very different results if you cropped the photo to the subject - most gall photos on iNat are pretty tight shots, in my experience. Remember that iNat computer vision isn’t trained to identify taxa, it’s trained to identify iNaturalist photos of taxa. It looks at your photo and spits out results that say “this looks like other iNaturalist photos of [X taxon]”.
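For anyone who wants to crop before uploading without losing the metadata iNat reads, here’s a minimal sketch using Pillow (the filename and crop box are hypothetical; adjust them to your own photo):

```python
# A rough sketch of cropping a photo to the subject before upload, using Pillow.
# The filename and crop box below are hypothetical; adjust them to your photo.
from PIL import Image

photo = Image.open("galls_wide_shot.jpg")
exif = photo.info.get("exif")  # keep the EXIF (date/location) that iNat reads

# Pixel box around the subject: (left, top, right, bottom)
subject = photo.crop((1200, 800, 2200, 1800))

save_kwargs = {"quality": 95}
if exif:
    save_kwargs["exif"] = exif  # re-attach metadata to the cropped copy
subject.save("galls_cropped.jpg", **save_kwargs)
```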

5 Likes

I had a bunch of Orthosia and all of them auto-suggest O. hibisci now. I will see if I can replicate the problem and send the image, but I can’t recall which Orthosia photo had the issue.

Thanks for the update.

The link to the year-old note on training the model raises the question: when do you anticipate completing an update? I see that it is extremely resource-intensive to run, but I am curious what cycle you are thinking of going forward. I assume there is a trade-off between capturing the amazing new data and the time required to train the model.

Tony - no need to reply, Chris provided the answer below.

1 Like

I’d have to find the post, but they recently wrote that any new training run is on hold: it requires physically building new servers, and until iNat staff are cleared to return to in-office work, this is not possible.

The post is here: https://forum.inaturalist.org/t/computer-vision-training-status/21083/10

5 Likes

Got it, thanks for the link.

Just to clarify – there is a model currently being trained. It’s using the images posted to iNat before September 2020. The model after that one is the one without a start date.

6 Likes

Is it taking already-added IDs too seriously, or just doing weird things like that? I have no idea what’s confusing it in this case; it’s pretty obvious it’s a moth, or at least a winged insect.

3 Likes

The AI model is trained by showing it a bunch of observation photos and the identifications iNat users have given them, and it “teaches itself” to distinguish among them with relatively high accuracy. There are no “negative” or “irrelevant” data in the training dataset, and lots of “relevant” data are left out (taxa with fewer than 100 observations). As far as the computer is concerned, the only possibilities are the subset of organisms that iNat staff have shown it.
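As a rough illustration of that filter (the data structures here are hypothetical, not iNat’s actual pipeline; the 100-observation threshold is the one mentioned above):

```python
# Hypothetical sketch of the training-set filter described above: only taxa
# with enough photos are included, and there is no "not an organism" class.
from collections import Counter

MIN_PHOTOS_PER_TAXON = 100  # the inclusion threshold mentioned above

def build_training_set(photos):
    """photos: iterable of (image_path, taxon_id) pairs from identified observations."""
    photos = list(photos)
    counts = Counter(taxon for _, taxon in photos)
    eligible = {taxon for taxon, n in counts.items() if n >= MIN_PHOTOS_PER_TAXON}
    # Everything else is simply dropped: the model never sees those taxa,
    # so it can never suggest them, and it has no "none of the above" option.
    return [(path, taxon) for path, taxon in photos if taxon in eligible]
```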

I suppose they could add non-identifiable or non-organism training data, and then sometimes the computer vision would say “this is no organism at all!” But what is the value of that? If a person is uploading a photo of a chair, they are not engaging with iNaturalist in good faith, and the suggested non-ID is not helping them. If it’s an unusual organism (such as one that isn’t in the training set) or a low-quality photo, then “no organism” could deprioritize possibly correct, or partially correct, IDs. Notably, despite the low-quality photo, iNat’s computer vision seemed to pick out the face of your “bambi-bee” and suggest an animal, which is partially correct.

Unfortunately, it’s impossible to know with certainty why the AI has made one suggestion over another: training produces millions of numeric weights rather than human-readable rules, so the reasoning behind any particular suggestion is effectively opaque.

3 Likes

Well, it looks like a dog’s head, with two ears and a snout, all brown, so I get the AI’s response. Bigger differences can be seen in an image of the full organism, whether zoomed in or zoomed out pretty far.

2 Likes

In fact it depends, hah. But really, its mandible area is very long for a regular bee photo, so maybe crop another pic and try it?

I agree, but I doubt it will ever be as good as a human, at least not soon. I just think I could show this pic to some non-naturalists and many of them would be confused even if they know what a bee’s face looks like; you have to analyze the shape of the eyes, etc., while, as I understand it, the current system looks at the whole picture and can’t break it into parts.

3 Likes

@odole and @fffffffff please watch your tone and try not to derail the discussion.

Hm, what? I didn’t say anything bad, and I get what @odole means, though I think some cases are too hard for any kind of intellect. I don’t get the tone reference at all.

3 Likes

I’m not part of the iNat staff, just a member of the community. I explained my understanding of how the computer vision model works, but I’m not the person to defend it. I don’t have the power to make them change it to a random forest or anything else.

I think it’s remarkable that the computer vision works as well as it does, and I am acutely aware there’s room for improvement. Some of its shortcomings could be overcome with more/better data, and probably some can’t.

I don’t believe iNaturalist is trying to create an authoritative taxonomic classifier. Expert humans, who have real understanding of the organisms, will always be the best. I find it most useful to consider the computer vision suggestions like an enthusiastic amateur identifier, like many members of our community: often right, sometimes confused or mistaken, but ultimately one among many voices who can weigh in on any observation.

8 Likes

Something we’re working on, but please keep in mind that iNaturalist has a total staff of 8 people, none of whom work on computer vision full-time (and some like me aren’t coders at all). With our current resources, we’re doing what we can. That’s just the reality of where things stand at the moment.

@chrisangell provided some really good answers about how iNat’s CV works and what it is and isn’t, thanks Chris. Not sure I have anything to add there, although we’ve done some experiments that show you which part of the photo is being used to determine a suggestion, which is really cool and would remove some of the “black box” mystery behind CV’s suggestions. Still lots of kinks to work out, though.
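For anyone curious what such an experiment might look like in principle, here’s a rough occlusion-map sketch. This is not iNat’s actual code; the `predict` function is a hypothetical stand-in for any image classifier:

```python
# Hypothetical sketch of an occlusion map, one way to see which part of a
# photo drives a classifier's suggestion: slide a gray patch over the image
# and record how much the model's confidence in a given taxon drops.
import numpy as np

def occlusion_map(image, predict, taxon_idx, patch=32, stride=16):
    """image: HxWx3 float array in [0, 1]; predict: fn(image) -> class probabilities."""
    base = predict(image)[taxon_idx]
    h, w, _ = image.shape
    rows = (h - patch) // stride + 1
    cols = (w - patch) // stride + 1
    heat = np.zeros((rows, cols))
    for i, y in enumerate(range(0, h - patch + 1, stride)):
        for j, x in enumerate(range(0, w - patch + 1, stride)):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = 0.5  # cover this region with gray
            heat[i, j] = base - predict(occluded)[taxon_idx]
    return heat  # high values mark regions the suggestion depends on
```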

Regarding @odole’s bee photo - as I stated earlier, iNaturalist’s computer vision model is trained on iNaturalist photos of organisms. Almost no one uploads combined photos like this one, and I suspect almost no one uploads photos of bees that just show the head - usually most of the body is in the frame. Thus the CV model won’t recognize a photo with a dark blob in one corner and a sharp wide shot of a flower on the other side (or just a blurry bee head close-up) as a photo of a bee. I recommend posting separate photos rather than combining them.

Regarding @astra_the_dragon’s comment, I think she was referring to odole’s sarcastic sour grapes remark (at least it came off as sarcastic to me, as well as others) and that the topic has strayed quite a bit from its original question, which I answered here and here (and others of course gave great responses as well). If someone can find a consistent issue, please file a bug report. I’m going to close the topic as the original question was answered.

8 Likes