Completely inaccurate species suggestions

I’ve noticed recently that the iNaturalist app on my iPhone can’t identify rabbits very well, and there are some other things it struggles with, though some are OK. Here’s a screenshot from my iPhone.

The website works fine, however.


Similarly, I have been noticing for a couple of weeks that the CV on my Android has been suggesting random IDs that are not even in the right kingdom. E.g. bumble bees, which I ID a lot, get a bird as a suggestion. It used to get these accurate even when the bee was a tiny dot in the photo. I’m sure the AI has gotten less accurate recently. When was the last release?

Thanks for the examples, but without the original photo for us to test with, the location of the observation, and knowing whether or not you entered a location before getting a suggestion, there isn’t really anything we can use for an investigation. If you can share those, then we can take a look.

Note that location data is used when providing suggestions, so the suggestions may be quite different depending on whether or not you’ve added location data.
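As a rough illustration of why location matters (hypothetical data and function names, not iNaturalist’s actual algorithm), a location-aware suggester might re-rank raw vision scores against a list of taxa known to occur nearby:

```typescript
interface Suggestion {
  taxon: string;
  score: number;
}

// Hypothetical re-ranking step: boost taxa seen near the observation's
// coordinates, then sort. Purely a sketch of the general idea.
function rerankByLocation(
  cvScores: Suggestion[],
  nearbyTaxa: Set<string>,
  nearbyBoost = 2.0,
): Suggestion[] {
  return cvScores
    .map((s) => ({
      taxon: s.taxon,
      score: nearbyTaxa.has(s.taxon) ? s.score * nearbyBoost : s.score,
    }))
    .sort((a, b) => b.score - a.score);
}

// Without location, the bird outranks the bee; with a nearby-taxa
// boost, the locally common bee comes out on top.
const ranked = rerankByLocation(
  [
    { taxon: "Turdus merula", score: 0.5 },
    { taxon: "Bombus terrestris", score: 0.4 },
  ],
  new Set(["Bombus terrestris"]),
);
console.log(ranked[0].taxon); // "Bombus terrestris"
```

This is why the same photo can get very different suggestions before and after a location is set.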

Anecdotally I’ve added dozens of observations in the past week, using both iOS and the web, and haven’t seen anything completely inaccurate.


Location first. And also a broad taxon to point the CV in the right direction - this obs of a bee on a flower … is for the bee, not the flower.

The example I have given is for identifying observations of bumble bees in New Zealand. I filter on Location = New Zealand and Genus = Bombus. Mostly the bumble bees have accurate suggestions, like B. terrestris or B. ruderatus as first and second choices. To have a bird suggested as the second option is weird.
There were many other instances I noticed but this was the one I remembered. I’ll post some others if I find them.


Picking up on the orange stripe for a blurred picture of that swallow?


I agree it should be fixed, but identifying a bat as coronavirus is kinda funny. :D

I think some people photographed their test results.


3 posts were split to a new topic: Unexpected species suggestions

It has happened again. Out of 12 observations to be uploaded, one had suggestions that were wildly off. Very irritating.

I always put the location in first.

I mentioned this problem on Librewolf on Linux in July 2023.

I had to give up using Librewolf and went back to Brave.


Like I already said, setting the location makes no difference. The suggested species stay the same - bats, humans, COVID, trees, pretty much anything. Setting a location just narrows it down to nearby species out of those.

But sure, here’s one original photo if it helps any:

Edit: Well, that’s interesting. The photo shows up fine on my machine, and also in my observations. What’s up with the above? Well, here’s the original photo on imgur:

And here’s what it suggests when I upload it, no location set yet:


The first photo is indeed abnormal, even though the JPEG structure shows no anomaly. Did you use software to process that photo?

When uploading the ‘good’ bird photo (from Imgur), to me it correctly suggests birds (Turdus etc.).
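For anyone curious what “the JPEG structure shows no anomaly” means in practice: a quick sanity check is whether the file has valid JPEG start/end markers. This only verifies framing, not whether the pixel content itself is sensible - a garbled image can still be a structurally valid JPEG. A minimal sketch (the function name is mine, not from any library):

```typescript
// A JPEG file begins with the SOI marker (0xFF 0xD8) and ends with
// the EOI marker (0xFF 0xD9). This checks framing only; the image
// between the markers can still be visual garbage.
function looksLikeJpeg(bytes: Uint8Array): boolean {
  return (
    bytes.length >= 4 &&
    bytes[0] === 0xff &&
    bytes[1] === 0xd8 &&
    bytes[bytes.length - 2] === 0xff &&
    bytes[bytes.length - 1] === 0xd9
  );
}

console.log(looksLikeJpeg(new Uint8Array([0xff, 0xd8, 0xff, 0xd9]))); // true
console.log(looksLikeJpeg(new Uint8Array([0x89, 0x50, 0x4e, 0x47]))); // false (PNG header)
```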

Nope, it’s the original, straight from the camera. And since I’m using three of those, the camera isn’t the issue.
I guess if that’s what the AI sees in my uploads, it’d explain the random suggestions. But it looks fine on my end at every stage of the process, and ends up uploaded fine to the observations.

I’m gonna try and see if an edited (cropped) photo does the same, since the suggestions are just as random for this:

Edit: Okay, I think we found the issue? Same on imgur:

Alright! Figured it out. My browsers were blocking HTML5 Canvas fingerprinting, and evidently the uploader needs that for… something? As soon as I disabled it for this site, the suggestions started working.

Would’ve never guessed without seeing those messed up uploads… Case solved.


Librewolf offers some anti-canvas-fingerprinting options as well, so it makes sense it would have the same issues.

So is this bug getting fixed? Tracking fingerprints should be… optional.

I don’t think it’s clear exactly what the cause is, or that this is a bug (i.e., it seems like the system is working as intended). There’s legitimate info that can be obtained via HTML5 Canvas that isn’t necessarily used for nefarious purposes. I think most users understand that browser-based blocking (whether of ads, elements, fingerprinting, or whatever) is going to break some website functionality, and they may have to whitelist/customize some things to get the behavior they want.
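For what it’s worth, the symptoms in this thread are consistent with how canvas anti-fingerprinting typically works: rather than blocking canvas reads outright, strict modes hand back data unrelated to what was drawn (blank or randomized pixels). If an uploader resizes a photo through a canvas and reads the result back, it gets that poisoned data instead of the photo. A minimal model of the effect - `readBack` is a stand-in for `CanvasRenderingContext2D.getImageData()`, not a real API, and real blockers vary in how they poison the data:

```typescript
type Pixels = Uint8ClampedArray;

// Stand-in for reading resized image data back out of a <canvas>.
// With a strict canvas-fingerprinting blocker active, the page gets
// data unrelated to what it drew (modeled here as all-white pixels),
// so the thumbnail sent to the CV model no longer matches the photo.
function readBack(drawn: Pixels, blockerActive: boolean): Pixels {
  if (blockerActive) {
    return new Uint8ClampedArray(drawn.length).fill(255);
  }
  return drawn.slice();
}

const photo = new Uint8ClampedArray([120, 64, 32, 255]); // one RGBA pixel
console.log(readBack(photo, false)); // real pixels reach the CV model
console.log(readBack(photo, true)); // the model "sees" a blank image
```

Under that model, the suggestions aren’t random at all - the CV is doing its best with an image that isn’t the one the user thinks they uploaded, which is why whitelisting the site fixes it.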