Potential AI Photo Order Bias

Using iNat website with Firefox on iOS

This Example: https://www.inaturalist.org/observations/70291163

I’ve noticed that when the iNat AI fails, it can sometimes improve its identification if you re-order the photographs in the observation. In the example I have included, the Oregon Junco was first misidentified as a Sardinian Warbler, and none of the other potential matches were close. I then deleted the observation and submitted the same two photos again, but in reverse order. This time the iNat AI suggested Oregon Junco as the third potential match. I am not sure this is a bug, but I wonder if there is a recommendation for how best to order the photos we submit in an observation. This is especially relevant for plants, where I often order the photos in an observation: close-up of the flower, upper side of a leaf, lower side of a leaf, stem, and a habit shot.

Step 1: Submit observation with two photos and get incorrect ID.

Step 2: Delete observation.

Step 3: Submit observation with two photos in opposite order and get improved ID.

Moved this to General because, as you note, this is not a bug, but rather the AI working normally.

The AI only analyzes the first photo in the observation; that’s why. When I’m posting something, I upload the photos as separate observations to get IDs from each one, and then group them together.


As @henriqueandrades said, the system looks only at the first photo, so it’s easy to see why the results differ depending on which photo is #1.


Thank you @henriqueandrades. Many of my photos have no location attached because the camera I use for birds has no GPS. Instead, I use my iPhone to take a picture of the image on the back of the bird camera and get the location from that photo by grouping them all together. I then delete the iPhone photo, of course. But I guess I can just pair the iPhone photo with each of the separate bird photos.

Yes, knowing that, it IS easy to see, and somewhat surprising. It also makes me even more impressed that the AI is able to identify many plants! Usually not all of the characters needed to identify a plant are present in any one photo.


Yeah, and since it’s learning from our photos, it’s really good with “typical” photos and fails with unusual ones. In your example the system really focuses on that tail, a typical pose for Sylvia, so it doesn’t care that, other than the tail and the darkened head, the bird isn’t one. I know we’re getting a new system soon, and it should weigh what is seen nearby rather than the whole globe (?), so maybe it will stop suggesting birds from different continents.


This may not be exactly relevant, but I’ve noticed that the CV is absolutely terrible at recognizing juncos. I have to manually type in the species name more often than for any other bird, no matter how good the photos are.


@hkibak This is a very good strategy for plants. One thing I’d add for a set of plant images: For most plant families (especially Asteraceae), it’s also a good idea to include a close-up lateral view of the flower to show the calyx or phyllaries. These can be more instructive than a top-down view of the flower.

One more thing: crop, crop, crop! The images of the juncos in your example could be cropped much closer to the subject, and that will always help the CV identify a critter.


You don’t actually have to delete the observation, you can rearrange the photos in editing mode and run the CV again. I will usually do this if I’m asking for the CV’s “opinion” on multiple photograph observations.


Good advice about the lateral view of flowers; I’ve only recently begun to do that for most flowers, not just when it “looks good.”
I crop a lot! Almost all my bird photos are zoomed four steps in in iOS Preview and then cropped to the frame. More than that and they begin to lose definition, because my eyes aren’t great anymore and the focus is usually not perfect. Mostly I use iNat as a wonderful GIS rather than an identification tool, although not having to type in the names is very convenient.

Haha, yes. You are right, and I’ve used the photo edit reorder feature many times, so no excuse for that one :grimacing:

For the CV, or even the human eye, a closely cropped but somewhat soft or fuzzy pic is much more useful and identifiable than a more distant image with a lot of clutter and a smaller subject. My advice is to not let artistry get in the way of identification requirements.

If I can ask, how do you know this is true? I don’t see any reason why the AI couldn’t use all of the photos, and I’d expect the developers to do that, to improve the ID.

This is how it’s explained in the FAQ:

Which taxa are included in the computer vision suggestions?

This has changed over time, but as of the model released in March 2020, taxa included in the computer vision training set must have at least 100 observations, at least 50 of which must have a community ID. Photos for training are randomly selected from among the qualifying iNaturalist observations (that is, it is not only the first image of an observation that may be used for training). Related species are sometimes inserted into the suggestions based on being seen nearby. When using computer vision, only the first image is assessed.
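The quoted behavior can be sketched in a few lines. This is a hypothetical illustration, not iNaturalist’s actual code: the point is simply that a first-photo-only pipeline keys on `photos[0]`, so reversing the photo list changes which image the model ever sees.

```python
# Hypothetical sketch: a suggestion pipeline that assesses only the first photo.
# Function and file names are illustrative, not iNaturalist's real API.

def cv_input(photos: list[str]) -> str:
    """Return the single image a first-photo-only suggestion model would score."""
    return photos[0]

observation = ["junco_tail.jpg", "junco_front.jpg"]

print(cv_input(observation))        # the model scores junco_tail.jpg
print(cv_input(observation[::-1]))  # reversed order: it scores junco_front.jpg
```

This is why deleting and resubmitting is unnecessary: reordering the photos (or running the CV on each photo as its own observation before grouping) is enough to change the model’s input.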


This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.