Gamify accuracy? Award value to quality, not just quantity

Ahhh… I understand that now. I still don’t see why many identifications are a bad thing, but I concur that agreeing with an observation that already has many IDs is not very useful.

Agreed, agreed! Even wrong IDs can help lead the way to correct ones.

Wow … interesting. I feel like kids and others who might misuse iNat like this would be doing it as a game or some introvert-type thing. Now, maybe I just don’t see them. I have agreeing-IDs off in my notifications :-)

Curators should give those people a warning. But making their agreeing identifications ‘less valuable’ on some scale wouldn’t stop any troll. You’d have to make an iSpot-like reputation system, which the staff have already expressed their distaste for (and frankly, I would not like to see that implemented either).


The mentioned IDer is not a kid ;-) Actually, he presents himself as a serious expert. I started to doubt only after his 500+ IDs across various classes of a very large phylum, where it is not possible to be an expert, or even moderately knowledgeable, in all groups.


Warning to whom? Misbehaving users or unwilling IDers?

This is a good point. I'm also throwing stuff into specific families at times, as I know it will be seen by someone who will tell me one way or another if it belongs there.

I think there are many ways the algorithm could work to define “accuracy”, and whatever it was, it would need to take this and many other aspects into account. It certainly couldn't be as simple a count as observations and identifications are.

Maybe accuracy isn't even the specific goalpost that's needed… just some measure of quality control… or a reward system for different actions which would help motivate this.


There are some people who hardly ever give a first ID at species level - they mostly follow pre-existing IDs. I trust them less than those who dare to be leading with their IDs.


Ah yes, I liked it, so I must have seen it but forgotten.
I'd vote for this feature! Or some version of it.

My issue isn't sooo much with the leaderboard chasers… I am, in general, pro-gamification.
My issue is with accuracy overall and the perception of the iNaturalist data by folks I interact with outside of iNaturalist. The noted 65% accuracy level in insects could be better!
I think gamification might help.

Yes - I think this is a good example of the kind of thing that could potentially be measured.

I seriously wonder what adult in their right mind would waste their time this way. :-) I guess we’ve found one…

IDers that are way out of their league or adding faulty IDs on purpose. Of course, misbehaving users should be warned too.


Many adults would do that, and waking up to hundreds of agrees on bird observations is something I think everyone has had to deal with if their notifications are on. I have no idea why people do that other than leaderboard stuff (but this was already discussed before).


Yes! I also wondered about this.
Another criticism I’ve heard in feedback from UK entomologists was the lack of detail in the data.
There’s currently no acknowledgement of the work that goes into annotating data … I agree gamification could help here too.

I know a few bird identifiers that went overboard, indeed.

Now … if they are really experts, that’s great if they want to be on the leaderboards. More IDs, more accuracy! I wouldn’t care (and I don’t think anyone really should?) if a kid was using a field guide to ID on iNat, because at least he is using a trusted source. It’s the people who just click ‘Agree’ who are probably dangerous.

Thumbnail IDing is a no-no for anyone who is serious about IDing, IMHO. Admittedly I have thumbnail IDed before but only if it’s a super obvious species.


I’m not in favor of gamifying iNat at all. However, if it comes to that, I certainly would not award points/recognition/etc. to anyone agreeing with an observation that is already RG. At least that may help deter those users so hell-bent on being on the leaderboard that they overwhelm our notifications on observations that previously reached community consensus.
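That kind of rule could be sketched in code. The category names below mirror iNaturalist's own identification categories (leading, improving, supporting, maverick), but the function and the point values are invented here purely to illustrate the idea:

```python
# Hypothetical point-award sketch: leading/improving IDs earn more than
# supporting ones, and agreeing with an already Research Grade observation
# earns nothing at all. All weights are invented for illustration.
def id_points(category: str, already_research_grade: bool) -> int:
    if already_research_grade:
        return 0  # no reward for piling onto settled observations
    weights = {
        "leading": 5,     # first ID of that taxon on the observation
        "improving": 5,   # refines the community taxon
        "supporting": 1,  # agrees with an existing ID
        "maverick": 0,    # disagrees with the community consensus
    }
    return weights.get(category, 0)

print(id_points("supporting", already_research_grade=True))   # 0
print(id_points("leading", already_research_grade=False))     # 5
```

Under a scheme like this, the "agree with everything that's already RG" strategy stops producing leaderboard movement entirely.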


It’s not 65% accuracy; it is 65% plus an additional 20% that may or may not be accurate. You can’t interpret the too-precise group as inaccurate. Some may be right, some may be wrong; the point is the experts were not able to validate either way.

I’d really be interested in seeing more about what the experts felt were mis-identifications. For example I have a hard time believing almost 10% of the bird records are improperly identified on the site.

For example, the 50 most observed research grade species of birds currently account for almost exactly 30% (30.36% as I write this) of RG bird records. These 50 species are generally highly distinctive, with many eyes looking at them. I won’t say there are no errors in them, but the rate will be very low.

To then get to an overall 10% error rate, just about 1 in 7 research grade bird records outside the top 50 species would have to be wrong (10% spread over the remaining ~70% of records is roughly 14%). I can’t believe that is right.
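The back-of-envelope arithmetic here can be checked directly. The near-zero error rate for the top 50 species is an assumption taken from the argument above, not a measured figure:

```python
# Check of the error-rate argument: if the top 50 bird species make up
# ~30% of RG records and are assumed nearly error-free, what error rate
# among the remaining records is needed to reach 10% overall?
top50_share = 0.3036   # fraction of RG bird records in the top 50 species
overall_error = 0.10   # claimed overall misidentification rate
top50_error = 0.0      # assumed (hypothetical) error rate within the top 50

# overall_error = top50_share*top50_error + (1 - top50_share)*rest_error
rest_error = (overall_error - top50_share * top50_error) / (1 - top50_share)
print(f"Implied error rate outside the top 50: {rest_error:.1%}")  # 14.4%
print(f"Roughly 1 in {1 / rest_error:.0f} records")                # 1 in 7
```

Relaxing the top-50 assumption to, say, a 2% error rate only lowers the implied outside rate to about 13.5%, so the "1 in 7" conclusion is not very sensitive to it.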

FWIW - the distribution curve for insects is not that different, the 50 most observed insects on the site represent 19% of all insect RG records. That’s 50 species out of over 79,000 species with a RG record generating a fifth of all records. And just like birds, most of those 50 are pretty distinctive, and relatively easy to ID.

Unless the dataset in the experiment is weighted to match the distribution of records on the site, its relevance as an error measurement is a little unclear to me.


Interesting. Where are you getting these stats from? (The 30.36% for example)
Is there a page I’ve not seen or…?

I can see top species listings at least …
And if I go to Diptera (where I’m active), I see some of the top ones globally…

Lucilia sericata has 8048 observations.
My guess, purely based on the UK observations I monitor - about 8000 of those should be at genus level or are incorrect.

Calliphora vicina and Clogmia albipunctata both have 3800 observations.
My guess, purely based on the UK observations I monitor - about 3500 of these should be at genus level or are incorrect.

The top ten in Diptera are actually more like the top ten worst offenders, and the least accurate of all the species-level identifications percentage-wise, due to AI over-suggestion and blind agreement from those who know no better perpetuating the issue.

As noted on the parallel thread… placing birds alongside insects can be very misleading and is not comparing like with like. In the UK, we have 620 species of bird but 27,000 species of insect. We also have many, many more active identifiers in birds than in insects.


Incorrect, no.
Inaccurate, yes.

I just went to the respective explore pages

and then typed the counts of the top 50 into a spreadsheet to calculate.


Potentially inaccurate. If a record is RG as Sympetrum sanguineum and an expert suggested it is genus Sympetrum, that does not mean it is not S. sanguineum.

It may or may not be. There may be a good probability it is not, and it may be impossible to tell based on the evidence, but that does not mean it is inaccurate.

Inappropriately precise does not equal inaccurate.

For example, this observation: I would have no issue with an expert or anyone else putting it at Sympetrum based on the evidence provided.

But there is a high probability (I intentionally chose a record from my local area, where I am familiar with distribution) that it is correct. This species far outnumbers the other alternative locally. It is arguably too precise; it is, however, not provably wrong.


Well…I hate to argue semantics…but given the relevance to the topic…
I’d say inappropriately precise does indeed equal inaccurate.
And that this is one of the critical issues in Diptera at present…

Again, in less complex taxa the issues are less pronounced.
In your link you have 50/50 chance of being correct…you also allow for distribution and local knowledge …(which the bulk of the misidentifications in the ones I listed will not).
I think it might be a fair call. Not such a big deal at least.

But for the Lucilia I mentioned, in the UK we have 7 species.
So that’s just a 1 in 7 chance of being correct. A blurry photo with insufficient detail could only ever be accurately recorded at genus level. Anything else is just polluting the dataset.

Also worth noting, actually, that some of the examples I gave often aren’t even accurate to genus level in iNaturalist observations. The majority of Clogmia albipunctata observations aren’t even Clogmia as far as I know… so they have to be bounced back to family level. That’s more like a 1 in 100 chance of being correct.


Looking at the insect top 50 I can see what you mean though.
Maybe there are just more issues in Diptera than elsewhere due to its complexity.

It’s not a 1 in 7 chance. It would be a 1 in 7 chance if the species were equally and randomly distributed at both the time and place of the observation.
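A toy calculation makes the point concrete. The species names and abundance figures below are invented, not real data for Lucilia or any other genus:

```python
# Toy illustration: the chance that "pick the locally common species" is
# right depends on relative abundance at that time and place, not on the
# raw species count. Abundances here are invented for illustration.
abundances = {
    "species_A": 70,  # locally dominant species
    "species_B": 10,
    "species_C": 8,
    "species_D": 5,
    "species_E": 4,
    "species_F": 2,
    "species_G": 1,
}
total = sum(abundances.values())

p_uniform = 1 / len(abundances)             # "1 in 7" under a uniform prior
p_common = abundances["species_A"] / total  # abundance-weighted guess
print(f"Uniform prior: {p_uniform:.0%}, abundance-weighted: {p_common:.0%}")
```

With seven equally likely species the guess is right 14% of the time; if one species dominates the local fauna, the same guess can be right far more often, which is exactly the distribution-and-season argument being made here.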

No one, least of all me, is suggesting there are not groups where there are too many overly precise identifications. I am, however, taking exception to the idea that one small experiment with unclear parameters, outcomes, even inputs (for example, were the experts only shown the photo, or also given access to any comments, descriptions, observation fields filled in, etc.?) demonstrates that 35% of insect records on the site (or 10% of birds, or any of the other figures listed) are inaccurate.