The Kermes Scale Moth (https://www.inaturalist.org/taxa/324633-Euclemensia-bassettella) is a very distinctive moth with 137 observations from 76 observers, yet the computer vision never seems to suggest this species for an observation of it. It has never suggested it for any of my observations, and I tried it on several observations from other observers with no success. Why is this?
I don’t know very much about the process, but I believe a species has to reach a minimum of 20 Research Grade observations, all from different observers, before that species is submitted to the AI. Even then, it may have to wait until the next big training batch is run. Obviously, the more observations the AI has seen, the better it is at identifying the species.
The thresholds actually changed with the latest rollout of the models earlier this summer. They are now:
- 100 photos submitted
- 50 of which have a community ID
- the documentation no longer states whether a minimum number of distinct observers is required, or what that number is
That being said, assuming the species actually is in the model, I can’t speculate on why it isn’t being suggested. It is possible the species does not meet the 100-photo criterion, or did not when the most recent training batch was run. Yes, there are 140 observations (as I write this), but if you look at records created before, say, 2019-04-01 (I don’t know the exact date they pulled data for the most recent training run), there were only 83.
Is there any way to tell which taxa have computer vision models and which ones don’t (other than submitting photos of them for identification)?
None I am aware of. Anything well over the thresholds mentioned above should be in there. Theoretically, I guess you could open confirmed observations with good-quality photos (no need to submit your own; you can leverage what’s already there) and run the computer vision to see if the species comes up, but that’s awfully labour-intensive, and still not absolute confirmation of its status.
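If you did want to script part of that check, the observable side of the thresholds can be tested mechanically. A minimal sketch, with the caveat that the function name, the use of observation counts as a stand-in for photo counts, and the idea of treating this as a proxy are all my own assumptions, not anything iNaturalist documents:

```python
def likely_in_model(total_obs: int, community_id_obs: int,
                    photo_threshold: int = 100,
                    community_threshold: int = 50) -> bool:
    """Heuristic: does a taxon clear the published training thresholds?

    Uses the thresholds quoted above (100 photos, 50 of which have a
    community ID), with observation counts as a rough proxy for photo
    counts. Passing does NOT guarantee inclusion: training runs use a
    data snapshot whose cutoff date is not public.
    """
    return total_obs >= photo_threshold and community_id_obs >= community_threshold

# The moth in question today vs. at the guessed cutoff:
print(likely_in_model(140, 90))  # True  - clears both thresholds now
print(likely_in_model(83, 60))   # False - under 100 before 2019-04-01
```

This is a necessary-but-not-sufficient test: a taxon that fails it is almost certainly not in the model, while one that passes may still have missed the snapshot date.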