Anyone else noticing a big increase in outlandishly bad IDs?

It seems like there’s been a major uptick in really outlandish computer-suggested IDs, at least in the California observations. Stuff that has never even been recorded on the continent as IDs on common local species… has something changed recently with how the computer vision suggests things? Or do we just have a sudden influx of very inexperienced users?


Could be new users, but this seems unlikely since it's now the summer holidays in the northern hemisphere.

Noticed it yesterday and today with two Florida IDs

Well, to be fair, for the first of those, the right ID is the first option listed by the computer vision. You can't really legislate for folks going down the list and picking a lower-ranked option, and the second one is a pretty dire picture.

I’ve noticed this here in New Mexico. The AI no longer recognizes Southwestern White Pine and tries to ID it as Jeffrey Pine or Eastern White Pine, neither of which is even close. The others concern small mammals.

I’ve given up using it for bugaroos

I noticed today that I was getting suggested species that were from Europe and Asia on North American species.

Can you provide some examples? I haven’t noticed any uptick in California but I haven’t done much IDing the last few days.

This isn’t California, but here’s one that the AI was getting right about 80 percent of the time; now I’m lucky if it even suggests it. It’s the only one here.

A couple that have particularly caught my attention:

California mallards are often getting suggested as Pacific Black Duck, despite that species never having been observed in the Americas:

Or suggested as Pacific Black Duck x Mallard hybrids:

Honestly I think a lot of these issues could be resolved if the AI weighted documented range a little higher when suggesting species options.


my understanding is it doesn’t weigh it very much, other than to note what was observed nearby. There have been lots of requests to limit it to nearby past observations so it doesn’t suggest things so far out of range (which I agree with too, fwiw), but so far it hasn’t happened, so maybe the devs don’t want to?
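To make the idea concrete, here is a minimal sketch of what "weighting nearby observations higher" could look like. Everything here is hypothetical: the function name, the boost/penalty factors, and the example scores are all made up for illustration and do not come from iNaturalist's actual model.

```python
# Hypothetical sketch: re-rank raw vision scores using a "seen nearby" signal.
# All names and numbers are illustrative, not iNaturalist's real implementation.

def rerank(suggestions, seen_nearby, nearby_boost=2.0, out_of_range_penalty=0.25):
    """Scale each raw vision score up if the species has nearby records,
    and down if it has never been recorded in the region."""
    reranked = []
    for species, score in suggestions:
        factor = nearby_boost if species in seen_nearby else out_of_range_penalty
        reranked.append((species, score * factor))
    return sorted(reranked, key=lambda pair: pair[1], reverse=True)

# A top visual match with no local records drops below a locally common species:
raw = [("Pacific Black Duck", 0.48), ("Mallard", 0.45), ("Gadwall", 0.05)]
print(rerank(raw, seen_nearby={"Mallard", "Gadwall"}))
```

Even a crude multiplier like this would push the mallard back above a species never recorded on the continent, while still allowing a genuinely out-of-range match to surface if its visual score is overwhelming.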


As someone who would need to dig through the planet to get to the US, the automatic suggestions have always been hilariously bad for me, with species from the Americas often springing up.


I regularly search for species that often get a wrong ID. For example, Silene colorata (a Mediterranean plant) in Northern Europe, Macroglossum stellatarum (a Eurasian species) in North and South America, Cardamine californica in Europe, Viola sororia (an American species) in Europe, etc. There was an increase in wrong IDs this year, maybe due to an increase in people using the app. There seem to be clusters of wrong IDs, too. I get the impression that people get very confident in an ID when there is an observation with the same ID nearby.


yeah i have definitely noticed this

Yes, they do seem to appear in clusters like that! Once there’s one bad ID in the area more seem to follow immediately. I try to keep an eye out for out-of-range observations on the species I know well, but it can be overwhelming trying to keep up.


I haven’t noticed an increase, but in general, I think the computer-suggested IDs don’t factor range in enough.

I don’t know how much data iNat has on range distributions, behind the scenes, but…from a plant ID standpoint it makes zero sense to recommend an ID for something that is far outside its range.

These types of IDs are ones that experts would make with great caution and reluctance, after carefully checking to exclude all possible confusing taxa.

It’s very different from suggesting a common species in the heart of its range.

Range is one of the most important things when narrowing down plants to ID.

As an example, I found several reports of Conoclinium coelestinum WAAAAY southwest of its reported range, in areas, incidentally, where Conoclinium dissectum is native. I don’t know whether or not these were auto-suggestions…but…

…I did encounter a Pokeweed observation WAAAY outside its range, and it clearly wasn’t pokeweed so I disagreed with the ID and in the ensuing discussion the OP commented that it had been auto-suggested to them.

I feel strongly that I do not want the auto-ID to ever suggest plants outside their recorded range. Allow people to report them, yes, but do not auto-suggest them to someone who is just casually trying to ID an unknown plant. And I think it would be worthwhile for iNat to report, at the time of ID, that the report is outside range or is unusual. These are special records that I think require more ID care. It’s a little like how eBird handles these things…they flag the observation and say “you have made an excellent observation…we need more supporting evidence”.
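The eBird-style flag described above could work something like the sketch below. Note that everything here is an assumption for illustration: the dictionary of recorded ranges, the crude bounding-box test, and the function name are invented, and real range data would be far more nuanced than a rectangle.

```python
# Hypothetical sketch of an eBird-style out-of-range flag, assuming each
# species has a rough bounding box of recorded occurrences (illustrative data).

RECORDED_RANGE = {
    # species: (min_lat, max_lat, min_lon, max_lon) of existing records
    "Conoclinium coelestinum": (25.0, 45.0, -100.0, -70.0),
}

def out_of_range(species, lat, lon, ranges=RECORDED_RANGE):
    """Return True if the report falls outside the species' recorded extent,
    i.e. it deserves a 'please add supporting evidence' prompt."""
    box = ranges.get(species)
    if box is None:
        return True  # no records at all: definitely worth a closer look
    min_lat, max_lat, min_lon, max_lon = box
    return not (min_lat <= lat <= max_lat and min_lon <= lon <= max_lon)

# A report from southern Arizona, well west of the recorded extent, gets flagged:
print(out_of_range("Conoclinium coelestinum", 32.2, -110.9))  # True
```

The point isn't to block such reports, just to trigger the "excellent observation, we need more supporting evidence" prompt at ID time.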

Is there anything in place like this currently? I.e. is iNat factoring in range maps, and just…doing it inadequately? Or is it that it isn’t factoring in range much at all?


These questions and comments in general probably fit better elsewhere than in this thread, which is specifically about recent changes.
(Yes, that may apply to my comment above too, sorry about that :). )

While I agree this may be good for users who don’t know to check distribution, there are some situations where it’s good to see recommendations for visually similar species not previously reported in an area.
It’s happened to me a few times that I observed something which hasn’t previously been reported on iNat in my country (or all of Eurasia…).
It was helpful to me to see what the recommended species were, so I could at least try to narrow my find down to Order or just… better than “so, this is a plant :woman_shrugging:”.
Additionally it may make it more difficult to track new introductions of invasive species.

Perhaps these issues aren’t really relevant for experienced users, and it’s better to favour a newbie-friendly recommendation AI that ignores way-out-of-range suggestions.


This makes sense…but…if this is the use of it, I think it would be much safer to present the information to the user with some sort of disclaimer.

Also, what you describe, while a great way of approaching it, is not how all users use this information. And I don’t see enough in the UI that even gently discourages or guides users away from using it this way, and guides them towards using it in the way you suggest.

Currently, the suggestions say “Visually Similar” or “Visually Similar / Seen Nearby”. A conscientious user might start noticing these flags and start noticing when something lacks the “Seen Nearby” addition, and start doing a little more thorough research before selecting these from the list. But this requires more effort.

I’d rather our UI steer the more casual users more in the “best practices” direction.

So for example, perhaps it would be an improvement if the cases where something was out of range would say something like “Visually Similar / Out of Range” or “Visually Similar / Not Reported Nearby” or something of the sort. It could also use the word “but”, or it could color the text orange, red, or yellow, or some other color that has a connotation of “Slow down / stop and think”.
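As a toy illustration of that labelling idea, the sketch below pairs each suggestion with a label and a colour cue. The label strings, the "warning" colour name, and the function itself are all hypothetical; iNaturalist's actual UI only distinguishes "Visually Similar" from "Visually Similar / Seen Nearby" today.

```python
# Hypothetical sketch of the labelling idea above; the strings and the
# "warning" colour cue are illustrative, not iNaturalist's actual UI code.

def suggestion_label(seen_nearby: bool) -> tuple[str, str]:
    """Return (label text, colour cue) for a computer vision suggestion."""
    if seen_nearby:
        return ("Visually Similar / Seen Nearby", "default")
    # An attention-grabbing colour nudges casual users to slow down.
    return ("Visually Similar / Not Reported Nearby", "warning")

print(suggestion_label(False))
```

Even this small visual distinction would let a casual user pause before picking something never recorded in their region, without hiding the option entirely.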


I can confirm we haven’t changed anything with computer vision in the last month. The last changes were described on the blog.