Has anyone used iNat data to determine the level of difficulty of IDing a given species?

I was thinking it might be interesting to review observations of a given taxon, in my case fungi, and create an identification difficulty rating for a species by comparing data across its observations: how many reach Research Grade versus not, how many different IDs were suggested, how often the first guess was correct, etc.

Seems like something that would have been done before, so thought I would ask before I spend time poking at it.
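
For anyone who wants to poke at the same idea, here’s a minimal sketch of how the raw counts could be pulled from the public iNat API (the endpoint and parameters are real; the difficulty score at the end is just a placeholder formula made up for illustration):

```python
import requests

API = "https://api.inaturalist.org/v1/observations"

def count(params):
    """Return total_results for a /v1/observations query without fetching rows."""
    resp = requests.get(API, params={**params, "per_page": 0})
    resp.raise_for_status()
    return resp.json()["total_results"]

def difficulty_stats(taxon_id):
    base = {"taxon_id": taxon_id, "verifiable": "true"}
    total = count(base)
    research = count({**base, "quality_grade": "research"})
    return {
        "total": total,
        "research_grade": research,
        # placeholder rating: share of verifiable obs not reaching Research Grade
        "difficulty": 1 - research / total if total else None,
    }

print(difficulty_stats(47170))  # 47170 = Fungi on iNat; swap in your species
```

First-guess accuracy and the number of distinct IDs suggested would need the per-observation identification lists (e.g. via the /v1/identifications endpoint), but the pattern is the same.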

Very interesting. You can get an intuitive feel for this by watching every observation uploaded from an area for a few years (I’ve been doing that with Mississippi for about two years). I’ve never thought of quantifying it. I would think “how likely the first ID is correct” would be the most useful of the metrics you mentioned. The number of conflicting identifications is another interesting metric, though I’m not sure it measures the underlying phenomenon as effectively.

Agreed. Just from watching, I know which species are easy and which are not, but there are lots of borderline cases that I would love to tease out statistically.

you could sort of expand on the effort here: https://forum.inaturalist.org/t/what-things-are-misidentified-as-large-milkweed-bug/12571. instead of just showing how many identifications end up at other taxa, you could also incorporate the number of identifications that end up at the taxon of interest (and maybe at something higher than species).

this approach may be of limited value at a species level because many observations end up at higher-than-species ranks, whether due to disagreements or to situations where it’s hard to determine what the species is. however, it can be useful at higher levels: https://forum.inaturalist.org/t/recruiting-more-identifiers/2388/294.

if you were to combine this with the earlier approach, then maybe you could do something that makes more sense at a species level though.

to be really useful though, all this may need to be tempered by some sort of geography component, which could be hard to do. for example, a black and white Chickadee where i am is probably a Carolina Chickadee, but in other parts of the US, folks might have to do a lot more work to differentiate between Carolina and Black-capped.
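
to make that concrete, here’s a rough sketch of one way to fold in geography: compute, per place, how often a genus’s observations stay stuck at genus rank or coarser (lrank is a real API filter; treating “stuck above species” as the difficulty proxy is just my assumption):

```python
import requests

API = "https://api.inaturalist.org/v1/observations"

def count(params):
    resp = requests.get(API, params={**params, "per_page": 0})
    resp.raise_for_status()
    return resp.json()["total_results"]

def regional_difficulty(genus_id, place_ids):
    """share of verifiable obs in a genus whose community ID is stuck
    at genus rank or coarser, computed separately for each place."""
    out = {}
    for place in place_ids:
        base = {"taxon_id": genus_id, "place_id": place, "verifiable": "true"}
        total = count(base)
        stuck = count({**base, "lrank": "genus"})  # lrank=genus -> genus or coarser
        out[place] = stuck / total if total else None
    return out

# usage (ids are placeholders): regional_difficulty(some_genus_id, [place_a, place_b])
```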

An interesting idea!

We don’t have viewer stats / vanity metrics on iNat, as for example Facebook’s ‘seen by 307’.

I would at least like to see ‘seen by’ on my own obs - which might prompt me to try for better / more pictures of that next time.

When I am picking through broad planty IDs, I would like to know if 20 people before me have looked at those images and said: sigh, well, it’s a dicot at least… If I am the first or second, I have a fighting chance of spotting field marks I can recognise.

This would also be interesting to map out per region: some species are very easy in one area where, for example, it’s the only species occurring in its genus, but a few hundred miles further south there are suddenly three other identical-looking species and species-level ID is impossible from pictures.

i’ve been thinking about this, and this might be one way to reduce the number of too-specific ids that get selected from iNat’s CV suggestions.

for species that have a bad rating, or which fall in ancestor taxa that have bad ratings, maybe display only the higher-level ancestor taxa rather than species-level taxa by default.
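
a toy version of that display rule might look like this (the 0.5 threshold and the dict structures are made up for illustration; iNat’s actual CV pipeline is not public):

```python
def display_taxon(species_id, difficulty, parent, threshold=0.5):
    """walk up the ancestor chain from a CV species suggestion until we hit
    a taxon whose difficulty rating is acceptable, and display that instead.

    difficulty: dict taxon_id -> rating in [0, 1], higher = harder
    parent:     dict taxon_id -> immediate ancestor taxon_id (None at the root)
    """
    taxon = species_id
    while taxon is not None and difficulty.get(taxon, 0.0) > threshold:
        taxon = parent.get(taxon)
    # if every ancestor is rated badly, fall back to the original suggestion
    return taxon if taxon is not None else species_id
```

so a hard-to-ID species would surface as its genus or family in the suggestion list by default, while easy species would be unaffected.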

I know this is a bit old, but we went through all the milkweed bug images and looked at how many were incorrectly ID’d:
https://onlinelibrary.wiley.com/doi/full/10.1002/ece3.10213

Mostly, they were very accurate (~98%), but the common mis-IDs were predictable based on mimicry complexes. So I have continuing thoughts on that! We do a bit more in the paper to look at the number of identifiers and how that predicted identification accuracy.

I tried to get at exactly this question last year by identifying over 33,000 observations of my primary study species of lizard, Uta stansburiana. I found 70 errors in the pool of observations, which equates to observations of that species being correctly identified 99.75% of the time by the iNat community. The misidentified observations had anywhere from 0 to 6 supporting IDs, which means most were already at Research Grade for the wrong ID when I found them. Here’s what a plot of those data looks like.

Next, I calculated how accurate the IDs were with a given number of supporting IDs. These data suggest that an observation can be considered near 100% accurate once you get up to about 4 or 5 supporting IDs. Those data look like this.

I just finished revisions on the manuscript, so these data are not yet published, but I’m hoping they will be sometime later this year.
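
For anyone who wants to run the same calculation on their own audit data, the core of it is just a group-by; the column names below are hypothetical, not from the actual study:

```python
import pandas as pd

# one row per audited observation; hypothetical columns:
#   n_supporting : number of supporting IDs at the time of the audit
#   correct      : True if the community ID matched the expert re-ID
df = pd.read_csv("audited_observations.csv")

accuracy = (
    df.groupby("n_supporting")["correct"]
      .agg(accuracy="mean", n="size")
      .reset_index()
)
print(accuracy)  # mean accuracy and sample size per supporting-ID count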

Interesting and useful study!

In my opinion, it could only be done with a series of standardized observations. For various reasons, certain species are photographed less well than others. It should maybe also be taken into account that some species are more variable than others.

Or, to look at it another way, ask about the level of difficulty of identifying iNaturalist posts, which would include both the “real life” difficulties some organisms present and the way the organisms are usually photographed and reported on iNaturalist.

I’m sure that there is a previous discussion on the forum about creating a traffic light system to represent ID difficulty, but I can’t find it. Currently, the simplest way to assess this is to look at the ‘similar species’ tab on the taxon page: the number in the grey circle at the top right of each thumbnail tells you how many times the species has been misidentified as the thumbnail species.

Given that iNat already records this statistic, I would have thought it wouldn’t be too problematic to create a traffic-light metric based on the number or proportion of mis-IDs.
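
A rough prototype, assuming the similar_species endpoint (which, as far as I know, is what feeds that tab) and some arbitrary cutoffs; the proportion below mixes identification counts with observation counts, so treat it as a crude proxy:

```python
import requests

BASE = "https://api.inaturalist.org/v1"

def traffic_light(taxon_id, amber=0.05, red=0.15):
    """green/amber/red rating from the share of IDs landing on similar species."""
    r = requests.get(f"{BASE}/identifications/similar_species",
                     params={"taxon_id": taxon_id})
    r.raise_for_status()
    misid = sum(row["count"] for row in r.json()["results"])

    r = requests.get(f"{BASE}/observations",
                     params={"taxon_id": taxon_id, "per_page": 0})
    r.raise_for_status()
    total = r.json()["total_results"]

    share = misid / (misid + total) if (misid + total) else 0.0
    return "red" if share >= red else "amber" if share >= amber else "green"
```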

I’m sure I’m missing something here, but in your second plot, am I reading it correctly that the community accuracy was about 99.77% with no supporting IDs? So, does that mean the very first ID by someone other than the observer was more than 99% correct? And that each supporting ID after that increased the average community accuracy by a little bit? Was there enough data to calculate the standard deviation around each of those community accuracy values? I’m not trying to cast doubt on your work, by the way; I’m just trying to understand. Thanks!

Yes, that is what it’s implying… that those observations without any supporting IDs are usually correct. I think this is due to the CV being very accurate with this species. I’ve recently come to think of it this way: when I found those observations with zero supporting IDs, 99.75% were actually this species and not something else. So, not just the CV, but the community is pretty good at recognizing and correctly applying an ID to these. No, I didn’t calculate variation around those.

I think you meant “top right”? Just checking.

Good point overall!

Ha! Corrected, thanks.
