I’m not seeing that option on my end for some observations (below), but I do see it for other observations.
Yes, in that case there aren’t any close matches so there’s nothing to filter.
Which is interesting in itself, because shouldn’t there at least be something visually similar nearby, especially if the model can pick any species of plant or animal? Going back to the first pic I uploaded, it selected an octopus, which would not be seen within 100km…
Also, where pfau_tarleton used the compare function, it selected colourful birds from America, some distance away :D
I had raised this issue in the model thread here:
And speculated there on whether there could be strategies employed during the model development process to help reduce such outliers.
In terms of the user experience, I think part of the issue is that there are no confidence ratings displayed alongside the suggestions. It appears that in many cases there is a very sudden drop-off in confidence when there are no other good options to display, and you end up down in the “noise”, so to speak. Now, it must be that iNat already excludes results below a certain confidence threshold, since in some cases you are only shown a few options. So one option would be for them to simply raise that threshold. However, that also risks excluding useful suggestions for species that the model is simply less capable with. (And similarly for more complicated options, such as adjusting the confidence threshold relative to the number of results returned – it is difficult to get a “one size fits all” solution.)
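To illustrate the trade-off, here is a minimal sketch. All the taxon names, scores, and threshold values are invented, and iNat’s actual cutoff logic is not public; this just contrasts a fixed cutoff with one relative to the top score:

```python
# Hypothetical CV scores for one photo (names and values invented).
suggestions = [
    ("Moth A", 0.62),
    ("Moth B", 0.21),
    ("Moth C", 0.08),
    ("Siberian Elm", 0.011),
    ("House Wren", 0.009),
]

def filter_fixed(suggestions, cutoff):
    """Keep only suggestions scoring at or above a fixed cutoff."""
    return [(name, s) for name, s in suggestions if s >= cutoff]

def filter_relative(suggestions, fraction):
    """Keep suggestions scoring at least `fraction` of the top score."""
    top = max(s for _, s in suggestions)
    return [(name, s) for name, s in suggestions if s >= fraction * top]

# A fixed cutoff of 0.05 drops the low-scoring "noise" here, but the same
# cutoff would also drop a genuinely best guess of, say, 0.04 for a
# species the model handles poorly.
print(filter_fixed(suggestions, 0.05))    # the three moths survive
# A relative cutoff instead keeps whatever is competitive with the top hit.
print(filter_relative(suggestions, 0.1))  # also the three moths here
```

Either rule behaves well on an “easy” photo like this one; they diverge exactly in the hard cases, which is why a single global setting is awkward.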
As an example, BirdNET shows a bar chart of the different probabilities in their bird call identifier (would be great if that could be integrated with iNat BTW!):
Though I think a simple numeric value (e.g. a percentage) would be easier to implement and understand as part of the iNat suggestions list.
Displaying these values would allow the user to ignore lower-rated suggestions in those cases where there are already clearly good ones to consider, while still being able to view them in those cases where none of the higher-rated suggestions provide a good match. Often when I am submitting a species I am not overly familiar with, I will look at a number of the top suggestions and their taxonomic trees to try to get a feel for what might be a reasonable first stab, to, say, tribe or genus level at least. It would be helpful to be able to see at what point in the list the confidence rating starts to drop away more steeply, to help decide how many suggestions are really worth a closer look.
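That “where does the list fall off a cliff” judgement could even be automated. A minimal sketch, with invented scores and a simple ratio test (not anything iNat actually does):

```python
def cutoff_index(scores, drop_ratio=3.0):
    """Return the index of the first suggestion whose score is more than
    `drop_ratio` times smaller than the one before it; everything from
    that index onward is probably noise. Assumes positive scores sorted
    in descending order."""
    for i in range(1, len(scores)):
        if scores[i - 1] / scores[i] > drop_ratio:
            return i
    return len(scores)

scores = [0.55, 0.30, 0.12, 0.004, 0.003]  # invented example
n = cutoff_index(scores)
print(scores[:n])  # [0.55, 0.30, 0.12] – worth a closer look
```

Even so, just showing the raw numbers and letting the user eyeball the drop-off is probably more robust than picking a ratio for everyone.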
There’s a browser app that shows % of iNat suggestions.
i don’t think these are supposed to be thought of as “confidence” ratings, but these scores do provide some additional insight into how the computer vision is ranking its suggestions.
When I first joined, a domestic dog came up for me as a South American Tapir.
It’s also really annoying when I have a photo of a plant that I don’t really know what it is, but it suggests an insect that is barely visible in the photo.
when you’re first loading, you won’t have yet provided any identifications for your observation. so the computer vision will make suggestions based on all organisms. but if you want a plant suggestion, you can load the observation initially as a plant, and then after upload, try the computer vision again, and it should limit its suggestions to the observation’s current iconic taxon – in this case, plants.
when you’re using the Compare tool, i believe you can limit your suggestions by using the taxon filter at the top. so filter for only bugs if you don’t want bird suggestions for your bug.
When I started with SEEK some years ago and was trying it out… I took a picture of a black and white cat, which came along as I was walking in the neighbourhood…
SEEK suggested a certain penguin. laugh
So I made some jokes over the next few days about having penguins in the neighbourhood.^^
Thanks for the link! The extension no longer appears to be available, but I will ask about it in that thread.
Well, I prefer “confidence”, as being suggestive of the model’s confidence level, over “probability” as BirdNET uses, which suggests something more objective… Maybe “score” could be considered even more neutral and less open to misinterpretation? In my mind it should just be a small displayed number without any label attached, such that those who have a use for it can see it, and those who are not looking for it will hardly notice it at all.
For my workflow in Identify - when I pull up CV suggestions, I have the same toggle option at the bottom of the list (if I open a new tab first). I am constantly flipping between Nearby and Visually Similar until I get a (broad) ID I am confident of.
A bit like looking for a jigsaw puzzle piece … with a bit of the brim of the red hat … and when you finally find the piece that fits, the red is in the shade and doesn’t look red at all.
@bsteer I use that extension.
Pretty sure it’s visually similar, seen nearby - and the quickest way to get to the ID that I already know.
PS clicking thru I see 404, but it still works for me.
@sessilefielder says the code is at Github - maybe someone (like @pisum ?) can activate it again?
Had a weird computer vision suggestion while uploading today. The 2nd image of this observation, which shows a species of Crambid Snout Moth, is returning suggestions of Siberian Elm (Ulmus pumila) and House Wren (Troglodytes aedon). The other images gave related moth taxa, since this taxon only has ~56 observations right now.
if you begin the upload process again using the same photo, do you get the same suggestions?
Yes, I already tried it twice. And it’s after indicating the location and date. Same suggestions. The other two images work better, but the images really aren’t that much different in my eyes.
hmmm… the reason i asked is because i couldn’t replicate the issue. i downloaded the “original” version of your image and started an upload with a location nearby, the computer vision suggested all moths. you might need to send your actual original file to help@inat so that they can try to reproduce.
It seems to be a problem with the CV algorithm integrating both the date and the location. I usually don’t check for suggestions until after I’ve entered both the date and the location. So I tested the permutations, and that seems to be the problem. Strangely it’s a problem with this image and not with the other two images of that moth, which is also confusing.
earlier i was inputting a location, but i see that the difference between what you did and what i did is that you’re including only suggestions that have been observed nearby, and i was getting all suggestions. so my suggestions looked more like your left-most item above.
i believe “nearby” is defined by this logic: https://forum.inaturalist.org/t/range-covered-by-the-seen-nearby-feature/2849/5.
since none of the suggested visually similar moths have been seen nearby, you end up with only the non-moths when you filter for seen nearby.
i checked your first photo, too, and that one happens to match a Siberian Tussock Moth, which has been seen nearby, but all of the other moths get excluded by the nearby filter.
so i think the computer vision is working as designed, but you might have to include all suggestions, not just ones seen nearby, if you’re trying to get suggestions for things in places where those things (and similar things) are unlikely to have been observed nearby.
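The interaction described above can be sketched like this. All the names, scores, and nearby flags are hypothetical; the real “seen nearby” test follows the logic in the linked thread, not this stub:

```python
# Each suggestion: (name, visual_score, seen_nearby) – all invented.
suggestions = [
    ("Crambid Moth A", 0.50, False),  # best visual match, never recorded nearby
    ("Crambid Moth B", 0.30, False),
    ("Siberian Elm",   0.02, True),   # weak visual match, but recorded nearby
    ("House Wren",     0.01, True),
]

def cv_suggestions(suggestions, nearby_only):
    """Rank by visual score, optionally dropping anything not seen nearby."""
    ranked = sorted(suggestions, key=lambda s: s[1], reverse=True)
    if nearby_only:
        ranked = [s for s in ranked if s[2]]
    return [name for name, _, _ in ranked]

print(cv_suggestions(suggestions, nearby_only=False))
# ['Crambid Moth A', 'Crambid Moth B', 'Siberian Elm', 'House Wren']
print(cv_suggestions(suggestions, nearby_only=True))
# ['Siberian Elm', 'House Wren'] – only the non-moths survive the filter
```

So the odd output isn’t the model ranking an elm above a moth; it’s the nearby filter deleting every moth before the list is shown.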
tiwane elaborates a little more on this above: https://forum.inaturalist.org/t/some-interesting-ai-suggestions-of-late/34847/17.
Good to know. Yes, I tend to have “nearby” checked so I don’t get suggestions from Australia and New Zealand. It would be nice if “nearby” were controlled by a slider for distance. The binary appears to be between “fairly close” (100 km?) and “anywhere in the world”. It tends to work rather poorly in places that are isolated or have fewer than a few hundred thousand humans submitting observations.
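A distance slider would essentially just parameterise the radius in the nearby check. A minimal sketch of what such a check could look like, using the standard haversine great-circle distance; everything else here is a hypothetical stand-in for iNat’s actual geo logic:

```python
from math import radians, sin, cos, asin, sqrt

def km_between(lat1, lon1, lat2, lon2):
    """Great-circle distance in km (haversine, Earth radius 6371 km)."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))

def seen_within(obs_latlon, records, radius_km):
    """True if any prior record falls inside the user-chosen radius."""
    lat, lon = obs_latlon
    return any(km_between(lat, lon, rlat, rlon) <= radius_km
               for rlat, rlon in records)

# A species whose nearest record is ~111 km away (1 degree of latitude)
# passes a 150 km slider setting but fails a 100 km one.
print(seen_within((0.0, 0.0), [(1.0, 0.0)], 150))  # True
print(seen_within((0.0, 0.0), [(1.0, 0.0)], 100))  # False
```

In practice iNat’s nearby logic works on pre-aggregated grid cells rather than per-record distances, so a continuous slider would be less trivial than this sketch, but the idea is the same.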
I’m fine - it has been working beautifully for me since you made it.
Your new fans will be glad!