What is a good percentage of supporting versus leading/improving IDs?

Looking over many 2025 Year in Review pages, I have noticed that the percentage of supporting IDs for power identifiers varies from a low of 22.7% to a high of 98.6% (with that user having had a 10,000-identification day), with the most common range being between 65% and 85% supporting. I can understand supporting IDs that take Needs ID observations to RG (for really high percentages), but if all the supporting IDs are on already-RG observations, it seems like leaderboard climbing. Is there anything inherently wrong with supporting IDs? I'm concerned that my percentage (68.5%) is "too high", even though it is nowhere near my 97% from 2020. What are your thoughts on this?

4 Likes

For context, the overall iNat supporting ID percentage is 76.0% for 2025, 74.7% for 2024, 73.2% for 2023, and 72.6% for 2022. Clearly a slow overall upward trend in supporting IDs.
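If you want to put a number on that trend: a quick least-squares fit over the four figures above (a minimal sketch; the yearly percentages are from this post, the fit itself is just my illustration) gives roughly +1.2 percentage points per year.

```python
# Least-squares slope of the sitewide supporting-ID percentage over
# 2022-2025, using the figures quoted above. Pure illustration.
years = [2022, 2023, 2024, 2025]
pcts = [72.6, 73.2, 74.7, 76.0]

n = len(years)
mx = sum(years) / n
my = sum(pcts) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(years, pcts))
         / sum((x - mx) ** 2 for x in years))
print(f"trend: about {slope:+.2f} percentage points per year")  # about +1.17
```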

2 Likes

Is there anything inherently wrong with supporting IDs?

No. There’s not a percentage that’s good or too high. Everyone is identifying from different skill levels to different ends: someone might have 100% supporting IDs as they learn to identify new organisms, or 0% if they’re working at some other extreme end of the scale. (Someone who exclusively works with mis-ID’d observations?)

I would ignore the metric entirely as anything other than a bit of interesting fluff.

28 Likes

Agreed; it speaks more to the sorts of things you identify than to the usefulness of your IDs. If you're IDing North American birds, most already have a correct ID present when you come to them, because the CV is good with them as long as the picture is decent. If you're IDing micromoths, most of them lack a correct species-level ID when you come to them, so your ID will be "leading" more often. Some days I straight up filter to only ID moths at order level, to try to find oddball stuff that has stumped people. On those days I have zero "supporting" IDs. Other days, I decide to check an easily misidentified species for mistakes, and I add hundreds of "supporting" IDs very quickly as I confirm the correct IDs while searching for errors. Neither of these ID practices is any more or less useful than the other.

21 Likes

I totally agree with the answers given: do not worry about those percentages at all; just ID what you think is useful and fun for you.

I recall there is some browser add-on floating around that quickly shows you an IDer's percentages when you hover over their name. The idea behind it seems to be that an IDer with a higher percentage of leading or improving IDs is probably more skilled.

I could not disagree more, and personally I think those stats (while I always like diving into my personal stats) are not very interesting. It would be a pity if easy-to-RG observations were left hanging in Needs ID limbo just because some IDer felt pressured not to add too many supporting IDs in order to pimp their stats.

I personally made an effort this year not only to dive through higher-level taxa to refine them (which often feels most rewarding to me), but also to take some species-level taxa and clear them out of the Needs ID pile (which is actually quite rewarding too; it's nice to see that pile shrinking).

11 Likes

…and this can be read in different ways. One is that the CV got better for several taxa, and thus it is easier to agree with the CV suggestion the observer chose instead of needing to correct it…

…another is that IDers doing taxon sweeps (e.g. going through a certain Needs ID species batch) can be rather fast using the keyboard shortcuts, and if there are more observations in such a batch (as we have more observers and also accumulation over time), you can do a lot more of those.


5 Likes

There is nothing wrong with supporting IDs.

I want my observations to have several supporting IDs, to confirm or correct them and to better guarantee the ID will remain RG if someone withdraws their ID or, even worse, removes themselves from iNat.

9 Likes

Yes, this is different from users adding supporting IDs to climb leaderboards. Currently the two are separated for individual taxon leaderboards (leading/improving IDs, not supporting IDs, push you up the taxon leaderboard one sees at the bottom right of observation pages) but not for global leaderboards. There are people who fall into the latter category and have clearly fallen into the leaderboard trap, but identifying them does not depend solely on these numbers; it also depends on how they ID overall and how they interact.

An identifier who makes any kind of ID (leading/improving/supporting) but is willing to interact is better than an IDer who never responds when there is disagreement. (Yes, there is a cognitive load in keeping track of notifications, and there are inactive users, but you will see the cheating users fall into this category consistently: ignoring tags on corrections + mass-IDing already-RG observations + being otherwise active.) Some IDers do breeze through the taxa they are most confident in and add supporting IDs that will take observations to RG, but not on already-RG observations.

Anyway, I have seen a temporal trend in decent identifiers: most new iNat users start with supporting IDs (even on RG observations; maybe they are trying to learn and to ascertain their stance) → then they stop touching RG observations to save time, unless they feel the need to correct → then they move up to higher taxonomic levels and push down IDs that are stuck there.

3 Likes

What sort of user qualifies as a “cheating user” by your definition?

As mentioned: maybe "majorly mass ID RG…"

I wish iNat could separate supporting IDs into two kinds: supporting on already-RG observations, and supporting that takes a Needs ID observation to RG.

1 Like

It never occurred to me that anyone would want to cheat at citizen science work.

2 Likes

The top identifier on iNat almost exclusively IDs common bird species globally that are already at RG.

3 Likes

Whoa! I had no idea folks were doing that. Wonder what the goal is? Some sort of naturalist clout?

People sometimes cheat in order to compensate for inadequacies in other areas of their lives. Their health might be not great, their relationships might be struggling, etc., but they can win at something! Even if it’s a somewhat hollow victory gained by cheating.

2 Likes

Individually, having a high supporting ID percentage doesn't really bother me as long as the quality of those IDs is good (defining what makes a good ID is another question). I've gone through stretches where I thought the best I could do was make sure all the Research Grade observations going to GBIF actually were what people thought they were, and this produced a lot of supporting IDs (my highest percentage was about 80% of ~51,000 identifications in 2022).

What I think about the percentage depends on the number of identifications you add and how much time you spend. If I reached a number above 90% for about 50,000 identifications, given the amount of time I have spent in the past, it would make me wonder whether I am actually correcting the incorrect identifications I see, or focusing so much on identifying easy organisms that I'm not learning or teaching enough. There isn't anything inherently wrong with a percentage this high, but I don't feel like "I" would be contributing to the community in a meaningful way if the percentages at this level of identification activity got much higher.

As for the 97% from 2020: it does seem a little concerning, but I'd encourage you to go back further. 2020 and 2021 were a low point for the sitewide percentage (around 70%), and it increases going both back in time and forward in time. In 2014, 79.42% of the 469,634 identifications were supporting. It's also worth thinking about what this means with an example.

These percentages, applied to a single observation, would yield three extra identifications after the leading identification has been proposed. Admittedly, this seems high to me; it would be as if, after the observer agreed with my ID, two more identifiers came along and hit the agree button. That said, I work in a study system where RG is usually reached by me proposing an identification and the observer just agreeing with me. For most observations, there aren't enough other identifiers present to confirm the ID. And honestly, if I were working on a group where many others actively worked to confirm identifications, I don't think I'd feel like I was learning enough. So my perspective as a plant taxonomist is a bit biased.
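To make that arithmetic explicit (a back-of-the-envelope sketch under my own simplifying assumption that every observation carries exactly one leading/improving ID): if a fraction p of all IDs are supporting, the average observation collects p / (1 - p) supporting IDs on top of the leading one.

```python
# Back-of-the-envelope sketch (my assumption, not an iNat formula): if a
# fraction p of all IDs are supporting and each observation carries exactly
# one leading/improving ID, the average observation gets p / (1 - p)
# supporting IDs on top of that leading ID.
def supporting_per_leading(p: float) -> float:
    return p / (1.0 - p)

for p in (0.726, 0.75, 0.7942):  # 2022 sitewide, the "three extra" case, 2014
    print(f"{p:.2%} supporting -> {supporting_per_leading(p):.2f} supporting IDs per leading ID")
# 72.60% -> 2.65, 75.00% -> 3.00, 79.42% -> 3.86
```

So the "three extra identifications" above corresponds to an overall supporting share of 75%.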

Overall, my IDs and percentages fluctuate. The fewer I add, the lower the percent supporting, so any judgment about these percentages should be contextualized by the overall number of identifications. Ultimately, the more I add, the higher my supporting ID percentage, regardless of how many high-quality identifications (corrections with helpful comments) I make.

5 Likes

Could some of this be explained by the CV improving? Observations with incorrect or overly coarse initial IDs are much more likely to get leading/improving IDs than observations with correct species-level ones. I also think there are other factors here; for example, a Northern Cardinal is probably far more likely to get just one supporting ID and no leading/improving IDs than a fungus is. I wouldn't be surprised if these numbers reflect more about trends in how people observe than trends in how people identify.
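A toy illustration of that composition effect (the numbers and group labels are entirely made up): even if the supporting rate within each group of taxa never changes, a shift in the observation mix toward the "easy" group raises the sitewide supporting percentage on its own.

```python
# Made-up per-group supporting rates: "easy" taxa (e.g. common birds) vs.
# "hard" taxa (e.g. fungi). Neither rate changes below; only the mix does.
rates = {"easy": 0.90, "hard": 0.50}

def overall_supporting(share_easy: float) -> float:
    # Weighted average of the two fixed per-group rates.
    return share_easy * rates["easy"] + (1 - share_easy) * rates["hard"]

for share in (0.5, 0.6, 0.7):
    print(f"{share:.0%} easy observations -> {overall_supporting(share):.1%} supporting overall")
# 50% -> 70.0%, 60% -> 74.0%, 70% -> 78.0%
```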

The higher percentage back in 2014, though, could maybe be explained not by the CV but by iNaturalist having a much more specialized/niche userbase back then, one that was considerably more likely to know the species of what they photographed from the get-go.

1 Like

Yes, the species count and user count graphs in the iNat Year in Review support that: fewer options (even if they were wrong supporting IDs) were picked back then by fewer, more niche users who were more active across the site. You can also see it now, when you notice recent ID corrections landing on those older supporting-ID observations.

I think that has a bigger caveat than now: a cheating user who has fallen into the leaderboard trap and forever adds IDs only to existing RG observations is mostly fine for platform quality, until there are major taxon revisions or niche taxa involved.

On the other hand, separating the metric out and using it to judge users adds other incentives: the same cheating users may focus on Needs ID observations and blindly agree (I have seen this still happening in India). Then it becomes ten times messier for everyone, both in making corrections and in continuously finding those mistakes, and especially in the flywheel effect of the AI learning the new wrong distributions and photos, and on and on.
I have seen cases where this flywheel effect is so strong (later IDers and observers believe it's a valid ID because of all the existing wrong IDs and map points) that someone blocked me for pointing it out and correcting their IDs, lol.

3 Likes

My supporting IDs made up 67.3% in 2025, and I don't even know where to look for a leaderboard, so no, I wouldn't say 68.5% is too high.

2 Likes