I tried to get some, but as I mentioned, when I went back & re-uploaded the same photos, it was working as normal.
@alex here's a fun one, a crop of this observation, which seems to pretty clearly be a nematoceran fly.
only nearby:
not nearby:
Date & location are the same as the original:
Here's the original cropped image:
thanks for this @sessilefielder
in this case what seems to be happening is that the vision model is suggesting a few types of nematoceran fly that aren't seen nearby, but with relatively low confidence. the combined vision + geo score isn't really helpful because there isn't much meaningful overlap between vision and geo (the taxa it kinda looks like aren't really seen nearby, and the taxa that are seen nearby it doesn't really look like).
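to make that concrete, here's a toy sketch of why the combined score isn't useful here. this is not our actual scoring code - the taxa, numbers, and the multiplicative combination are all made up for illustration:

```python
# Toy illustration (not iNat's real scoring): combining a vision score with
# a "seen nearby" geo score only helps when the two lists actually overlap.

vision_scores = {            # what the photo kinda looks like (low confidence)
    "Sciaridae": 0.22,
    "Mycetophilidae": 0.18,
    "Cecidomyiidae": 0.15,
}
geo_scores = {               # what is actually recorded nearby
    "Culicidae": 0.9,
    "Chironomidae": 0.8,
}

# hypothetical combination: multiply the two signals per taxon
combined = {
    taxon: vision_scores.get(taxon, 0.0) * geo_scores.get(taxon, 0.0)
    for taxon in vision_scores.keys() | geo_scores.keys()
}
print(combined)  # every product is 0.0: nothing scores well on both signals
```

since no taxon appears in both lists, every combined score collapses to zero and nothing useful rises to the top.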
we have some ideas for addressing this that we're already working on, so hopefully cases like this won't be a problem soon.
Is there any data available regarding the accuracy of the CV? e.g. how many IDs sourced from the CV actually get agreed with, make it to RG, etc. I'm sure there are more appropriate metrics, but I don't recall ever having seen any objective measure of effectiveness. I have always found it to be largely garbage for NZ spiders, but that may just be personal experience. I have always assumed other taxa may be luckier. But is there any real proof that CV is a worthwhile feature rather than just a data-damaging distraction?
It is unrelated to "luck"; the more training data the model has for a given species, the better it will be at recognising it. It is inarguably a worthwhile feature. It is very accurate for many, many taxa, including but not limited to birds, many fish taxa, many plants, and Australian moths. Your case does not reflect the global status quo. From a very rough eyeball, just 25-30% of uploaded New Zealand spider species can even be offered as a suggestion by the CV, based on their number of observations reaching the required threshold to enter the model. So the only way to get better suggestions is for NZ observers to upload more records of spiders, have them ID'd by yourself and other experts, and have them enter the CV. The same concept applies to many taxon/area combinations globally.
I am well aware of how CV works. I am asking for stats about how well it works. It is not an "inarguable" feature without an argument based on data. So if the data is available, please provide a link to it.
I don't have hard data, just personal experience/anecdotal evidence (the same as your spider example) from my IDing :)
Ken-ichi presented on our computer vision system at TDWG a few years ago, and he talked for about ten minutes on the question of "is it any good?".
The relevant part of his talk starts here:
https://www.youtube.com/watch?v=xfbabznYFV0&t=1755s
I've found that it works much better now than just a couple of years ago - at least in California, where there is lots of activity.
Except that I get very different results in some Identify modes than in others.
Thanks, I will take a look.
I'd ID that dog as nothing more than a certifiable good boi.
I'm still finding the CV is giving very erratic and random results. It's true I'm not in an observation hotbed, but one thing I would at least think would be reasonable is the following process (rough code sketch after the list):
Detect species from nearby DB: nothing found →
Detect genus from nearby DB: nothing found →
Detect subfamily from nearby DB: match → recommend
Not found → move to detect not nearby, then go up to the subfamily/family that matches nearby.
(or perhaps even the other way where it first detects/confirms family/subfamily match, then tries to match deeper)
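Roughly what I mean in code - a sketch only, where `match_at_rank`, the `nearby_taxa` set, and the `.parent` ancestry pointer are all hypothetical stand-ins, not iNat's actual API:

```python
# Hypothetical fallback cascade: try progressively coarser ranks against the
# nearby pool, then fall back to a non-nearby match rolled up to a nearby taxon.

RANKS = ["species", "genus", "subfamily", "family"]

def suggest(photo, nearby_taxa, match_at_rank):
    # 1) walk up the ranks, only accepting matches that are seen nearby
    for rank in RANKS:
        match = match_at_rank(photo, rank)
        if match is not None and match in nearby_taxa:
            return match
    # 2) nothing nearby at any rank: take the best non-nearby match and
    #    climb its ancestry until we reach a taxon that is seen nearby
    match = match_at_rank(photo, "species")
    while match is not None:
        if match in nearby_taxa:
            return match
        match = match.parent  # hypothetical ancestry pointer
    return None  # no usable suggestion at all
```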
The way it currently works is to completely ignore any family likeness and go for random species and genera based on, I don't even know what, it's so random.
Iād rather a stick insect be detected as Phasmida than Blue Ringed Octopus or Pacific Baza.
On a related note, is there a way to turn off CV suggestions on the iNat UI?
specific examples would be great, thanks!
in the mobile apps you can disable suggestions on the settings screen; the setting is called "Autocomplete Names"
on the web you can simply type into the "species name" field and the suggestions will be replaced with autocomplete for the text you've typed. (this also works on mobile)
I've got some uploading to do later tonight, so I'll grab some screens. It's very common so should be easy enough to get a few examples.
thanks!
That's a good video. Thanks for linking it.
And if you tell the CV "Diptera" - then look at the CV options within Diptera?
Diptera - visual match and/or seen nearby.
@alex screenshots of odd suggestions. In these screenshots, clicking "don't show nearby" will usually return something closer, but not always. As my latest obs weren't from around home, it's a bit different again and seemed slightly better, so I'll also find a few next time I upload backyard observations. These ones are actually reasonably good compared to some I've seen at home. I'm aware it's not offering "recommendations" with these, but that doesn't matter for novice users, as they just go with what is presented…
Here are some examples with and without "show only nearby".
Following on from my previous comment above: when showing non-nearby guesses, it at least picked a spider wasp (among some other wasp guesses), so if that has higher confidence, why does it not then suggest Pompiloidea or similar for the nearby list instead of suggesting nothing in the entire Hymenoptera?
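In other words, something like this hypothetical rollup - the `ancestry` helper and the `rank_level` ordering are assumptions for illustration (iNat's API does expose a `rank_level` where lower means more specific), not a claim about how the suggester actually works:

```python
# Hypothetical rollup: if the confident non-nearby guesses share an ancestor
# (e.g. Pompiloidea) that IS seen nearby, suggest that ancestor rather than nothing.

def common_ancestor_suggestion(guesses, nearby_taxa, ancestry):
    # guesses: taxa from the non-nearby list; ancestry(t): t's ancestors up
    # to the root. All names here are made up for illustration.
    if not guesses:
        return None
    lineages = [set(ancestry(t)) for t in guesses]
    shared = set.intersection(*lineages)          # ancestors common to all guesses
    candidates = [t for t in shared if t in nearby_taxa]
    # pick the most specific shared ancestor that is seen nearby
    return min(candidates, key=lambda t: t.rank_level, default=None)
```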
Previous threads:
https://forum.inaturalist.org/t/photo-gives-surprisingly-bad-ai-suggestion-when-location-is-specified/36840
This one has some examples from my home observations:
https://forum.inaturalist.org/t/some-interesting-ai-suggestions-of-late/34847