Yeah. I’ve found a few species where the CV wouldn’t suggest a locally abundant species because I happened to be just a couple hundred meters outside the model’s grid cell on the edge of the species’ range. While I like the idea behind this move, I agree that ‘seen nearby’ and ‘predicted-by-an-elevation-model nearby’ are really separate categories.
It does feel like this is the case, but I don’t know whether it truly is without seeing a real comparison between “seen nearby” and “expected nearby”.
I found a group of Corvus frugilegus sitting on a railway portal in Coevorden, the Netherlands, and all kinds of exotic birds were suggested. So yes, I think this is correct:
“This is a bit of an issue as a whole bunch of species that are not found in the local area have now been added to the CV ID.”
I retried it with the website, but I got all kinds of local species returned.
Maybe it is only an issue with the Android app? @earthknight Android app or website?
The “expected nearby” suggestions rely on different data than “seen nearby”, and cast a much wider net as a result. This means that in some areas a bunch of species that are actually not found in the area are now included in the CV results.
I suspect that this varies a bit by region. It’s probably less of an issue in large, relatively homogeneous areas, but there are large parts of the world with significant non-homogeneity, and in those areas the “seen nearby” system yields more accurate results.
This is why it would be useful to be able to toggle between “expected nearby” and “seen nearby” results.
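As a sketch of what such a toggle could look like client-side, here is a minimal filter over suggestion records. Everything here is hypothetical: `Suggestion`, its fields, and `filter_suggestions` are illustrative names, not actual iNaturalist API structures.

```python
from dataclasses import dataclass

# Hypothetical suggestion record: "seen_nearby" would be backed by actual
# nearby observations, "expected_nearby" by the geomodel's grid cells.
@dataclass
class Suggestion:
    taxon: str
    score: float
    seen_nearby: bool
    expected_nearby: bool

def filter_suggestions(suggestions, mode="seen"):
    """Keep only suggestions matching the chosen nearby criterion,
    sorted by CV score (highest first)."""
    if mode == "seen":
        keep = [s for s in suggestions if s.seen_nearby]
    else:  # "expected": the wider, geomodel-based net
        keep = [s for s in suggestions if s.expected_nearby]
    return sorted(keep, key=lambda s: s.score, reverse=True)

suggestions = [
    Suggestion("Rhododendron occidentale", 0.81, True, True),
    Suggestion("Rhododendron luteum", 0.12, False, True),
]

# In "seen" mode, only the species actually observed nearby remains.
print([s.taxon for s in filter_suggestions(suggestions, "seen")])
```

The point of the sketch is just that both flags can be computed server-side and the client can expose the choice, so users in highly non-homogeneous regions could opt back into the stricter “seen nearby” behavior.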
I’ve noticed a huge drop in the quality of suggestions since this was implemented, unfortunately.
Today I posted a Western Azalea and got an “expected nearby” suggestion for Golden Azalea… the nearest observation of that species is almost 800 miles from me, and is not even RG. The nearest RG observation is in Ireland (and I am in California).
It’s definitely weird that Rhododendron luteum is expected nearby in Marin. The model was trained on month-old data, so maybe there were some uncorrected observations of it when training started.
I suspect that in this case the photo has a lot to do with it as well. I submitted a cropped version of your photo from https://www.inaturalist.org/observations/188350095, with the leaf detail featuring more prominently in the image, using the same date and location, and here’s what I got:
How often are these going to get retrained? Just curious what the turnaround on that will be to account for corrections being made for out-of-range observations. Is it going to be with every CV update, which now seem to come out on a monthly basis, or less frequently?