Are you trying to "teach" the AI to recognize things?

I’ve come across a couple of very common, visually distinctive species in my area that the AI didn’t recognize. Now that I’ve figured them out, I’ve taken it on myself to upload pictures of them until the AI starts suggesting them–covering different angles, host plants, and life stages, but also just uploading observations of them I might not bother to make otherwise. I have no idea how the AI learns, so I don’t know how long this will take, or how many observations, but presumably it will work eventually, and in the meantime, more data is good either way.

Does anyone else do this? What organisms have you picked? Did you have any success yet? I’ve been doing Oat Crown Rust, a fungal pathogen of buckthorns, and just started doing the feeding damage of the Four-lined Plant Bug.


My understanding is that, to be included in the AI project, a species must have at least 20 research-grade observations from 20 different observers. I don’t know whether the AI discounts cases where one observer provides most of the data (for instance, a species with 20 observations from 20 different observers plus 500 more from a 21st person). Once a species reaches this eligibility threshold, inclusion should take only as long as the wait until the next update. This is not a daily or continuous process: I remember reading (somewhere that I cannot find now) that the model is re-trained on the latest data set from time to time (every month, maybe?).
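As a rough illustration, the “20 from 20” rule described above could be sketched as a simple filter. (This is purely hypothetical code; the field names and data shapes are invented, and iNaturalist’s actual training pipeline is not public.)

```python
def eligible_old_rule(observations, min_obs=20, min_observers=20):
    """Hypothetical check for the old '20 from 20' rule: a taxon needs
    at least 20 research-grade observations contributed by at least
    20 distinct observers."""
    research_grade = [o for o in observations if o["quality"] == "research"]
    observers = {o["observer"] for o in research_grade}
    return len(research_grade) >= min_obs and len(observers) >= min_observers

# A single prolific observer cannot satisfy the rule alone:
one_user = [{"quality": "research", "observer": "user_1"} for _ in range(500)]
print(eligible_old_rule(one_user))  # False: only one distinct observer

twenty = [{"quality": "research", "observer": f"user_{i}"} for i in range(20)]
print(eligible_old_rule(twenty))  # True
```

Under this reading, the observer count is what guards against single-user bias, regardless of how many extra observations one person piles on.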


If this is really how the AI works, it explains why it is so poor at recognizing insects.


Exactly. Many insect observations won’t get species IDs for research grade, and a number of the ones that do shouldn’t.


20 by 20 - no wonder we battle to get IDs for Cape Town plants. We might never reach 20 x 20 for plants that aren’t easily found and frequently seen.


I get way-out-of-range plant name suggestions from the AI on iNat when I post something, and some observers who are just learning are obviously posting with those wrong names. Could a geographic filter be used along with the AI, so that suggested names would come from the local ecoregion?


More than 70% of the species that have 20+ research-grade records also have 20+ observers.

As stated in the reference article, it is an intentional threshold to prevent user bias.


I don’t think this is how it works anymore. There was a significant change in the computer vision suggestions within the last couple of weeks or so. At least for insects, they have become WAY more sensible. We now see many more family and genus suggestions than before, as well as species suggestions for things that would definitely fall below the “20 from 20” threshold. There are still quite a few mistakes, some of them novel ones, e.g., a lot of North American Condylostylus turning up as Parentia, which does not occur in the New World. But generally the more conservative suggestions are much easier to stomach, and it looks like there should be more opportunities now for correct human IDs to make a difference, even if they are not species-level IDs for commonly observed organisms.


I know one person in Oklahoma said her moth ID results from the algorithm got a lot worse after the update, but overall most people seem to be reporting positive results. I haven’t noticed a change yet, but I haven’t used it much since then.


Even if you won’t personally push the AI over the threshold for Oat Crown Rust, thank you @Megachile (Adam) for doing what you’re doing. You’re probably making a lot more folks (in your area) aware of this species, which will lead to the 20x20 success.


Welcome to the forums! This is a great question and already the topic of much discussion and development interest:

And for anyone interested in all things Computer Vision, the following forum search pulls up a long list of other related threads:


This is definitely good info to have, thanks. There still seem to be some mysteries, though. Oat Crown Rust passed that mark a while ago (before the beginning of 2019) but still isn’t getting suggested. And in the case of the Four-lined Plant Bug, the AI is very good at recognizing the insect itself–it just doesn’t add the feeding damage. Since the AI doesn’t know what’s in the photos, it doesn’t seem like the 20-observation minimum would apply to new life stages or types of evidence for that species. Unless there’s another layer of anti-single-user-bias protection, my pictures should be able to contribute toward recognizing the feeding damage as distinct from the insect.

Sounds like someone could host an event with 20 guests, and see what results.

The threshold has changed since the initial iterations of the computer vision model. From the iNat FAQ:

Which taxa are included in the computer vision suggestions?

Taxa included in the training set must have at least 100 photos, at least 50 of which must have a community ID. As more observations are added and more identifications made, additional taxa can be added to the computer vision suggestions. This means your observations and IDs work to improve the model!
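Reading the FAQ wording literally, the current threshold could be sketched like this. (Again, a hypothetical illustration only; the field names are invented and this is not an official iNaturalist API.)

```python
def eligible_current_rule(photos, min_photos=100, min_with_cid=50):
    """Hypothetical check mirroring the FAQ wording: a taxon needs at
    least 100 photos, at least 50 of which have a community ID."""
    with_cid = sum(1 for p in photos if p["has_community_id"])
    return len(photos) >= min_photos and with_cid >= min_with_cid

# 100 photos, of which exactly 50 carry a community ID:
photos = [{"has_community_id": i < 50} for i in range(100)]
print(eligible_current_rule(photos))  # True
```

Note that as written the rule counts photos rather than observations or observers, which is what prompts the question below.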

I wonder if that means 1 observation with 100 photos could be included? :)

