Overzealous Identification

I’ve been pretty direct in the past (for those newer to the discussions) about not being comfortable with much of the concept of a reputation system.

In particular I don’t like any solution that solely rewards 1st identifications. This is not some small local site with a couple of dozen or even a couple of hundred users. It is a global site with tens, if not hundreds of thousands of active users. Turning it into a race to be the first one to ID something seems counterproductive to me.

Many of the best identifiers on the site post few if any observations themselves; punishing them for that seems inappropriate. Rewarding the number of RG observations will simply lead to people flooding the site with records, which risks corrupting the data (hey, I saw a flock of a hundred Mallards, so I’ll just add photos of all of them, since that is the only way I can become accepted as a Mallard expert).

I strongly favour the consensus approach the site has designed, not some top down approach.


I would like to see a ranking for each IDer: how many of their IDs have been confirmed by others, per taxon and per country. Or perhaps a grade with many levels up to 100%, depending on skill.

On observado.org, of the 50,000 users only 25–40 are allowed to confirm determinations… which is, I think, a bit too few.

I was not aware until now that GBIF accepted “research grade ID” data from iNaturalist, which for me mostly eliminates GBIF as one of my sources for geographic occurrence.

I have read the arguments in this and other threads as to why “research grade” = 2 agreeing IDs. I can see the challenge between iNaturalist’s excellent main purpose of getting folks involved in nature, and the secondary purpose of data gathering. But two random people agreeing on an ID for all the various reasons they might do so, does not create data that is usable for research purposes.

Checking a few specific examples, I found a tropical species that as a result of this now has a few records in northern North America in GBIF. The ID was made and agreed to by two biologists of good repute, who are not specialists in this taxon.

You mentioned that the onus will be on the researcher to clean the data from GBIF: I am trying to imagine how they would do so, at least for any sizeable quantity, aside from simply eliminating all data from iNaturalist or any other sources of similar quality?

Perhaps “research grade” as a term could be replaced with something more accurate.

Thanks for listening.


And we have a winner… :)

If you notice this overzealous and likely naive agreement happening on your observations, I suggest the following steps:

  1. Direct message the user and kindly ask for them to explain their reasoning/evidence/motivations for agreeing with so many IDs (provide some examples of their ID activity that you find problematic). We’ve had cases where people think they are being helpful because they see some of you as “experts” whose observations should be “research grade”.
  2. Block the user (if their activity is high volume and you’re concerned about being able to manage it; if you don’t have the patience or time to compose a polite message, you can skip straight to the block)
  3. Email help@inaturalist.org

Steps 2 and 3 will help staff keep tabs on this and also reach out to the user if necessary.

I’d also like to suggest that we keep this thread on the topic of how to manage overzealous identifications in the existing system in the absence of a reputation system. We’ve had many, many threads that turn into lengthy discussions of potential reputation systems, so I think it’s most productive to focus on the issue at hand. Short version: please don’t shift this thread to discuss reputation system ideas.



I support such a system. I would award more points for corrections, as it requires more knowledge to correct an incorrect ID than to just agree with or guess an ID.


In terms of GBIF, for plants at least I find the iNat data to actually be better than the other GBIF data. Broadly, the mapping precision on GBIF is all over the place, and it’s really hard to track down and examine a weird GBIF record, much less fix it.

Two points:

  • If there is a research grade record sent to GBIF from iNat, and it is subsequently corrected on iNat, apparently it is then removed from GBIF on the next sync of the data. Citation from a GBIF employee: https://forum.inaturalist.org/t/observations-of-cultivated-plants-on-gbif/5296/3
  • GBIF accepts many sources of data with far less review than takes place on iNat, even stuff like eBird, where there is no review and no requirement for any documentation at all.

As a casual user, I sure hope no one is relying on the “research grade” flags, except as a first-pass filter!

Has any consideration been given to creating a taxonomic rank-based reputation system, where an observation is considered to be research grade only after being identified by someone whose own observations in that genus (or family, or class…) have been confirmed by existing experts?

There’s a bit of a chicken-and-egg problem there, but that seems solvable :-)


Your first point is what I was hoping, thank you for clarifying that. If somebody went through and identified thousands of records to research grade that were not identifiable, then hopefully a curator could come along and undo it.

Good second point. I would much rather have vetted data on GBIF from iNat than eBird; at least with a photo I can take a look at it. This is particularly relevant to my project on difficult-to-identify urban parrots. But the eBird data are somewhat vetted, in that untrustworthy observations can be invalidated and not included in any of eBird’s research databases. If the observer consistently uploads suspect data or obviously erroneous sightings, then reviewers can block their checklists without the observer even knowing, so the abusers can continue to use the site without polluting the research data with their observations. To be clear, this happens after many attempts to discuss the problem with the user, through direct messages of course :)

I wonder if iNat could do something similar, identify problem users and exclude observations they’ve made or identified from GBIF but not iNaturalist. That way, the primary objective of connecting people to nature isn’t compromised and the data are much cleaner.

As stated earlier in the thread, curators have no authority to edit or delete identifications (nor should they, in my mind, and I am one of the most active curators on the site). Curators should be held to the same standards: only do an ID if you can identify it yourself.


I think the prize of Research Grade and the leaderboards are just too tempting for many. Here are a few ideas to help reduce people gaming the system, or even wanting to.

  1. Can you use a less exciting term? Consensus? Agreement?
  2. Hide any IDs more specific than genus until the 2/3 rule is met. It’ll add a bit of independence to the IDs.
  3. No leader board. Number of IDs is part of your profile but there’s no easy way to see your rank.
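
For anyone unfamiliar with the 2/3 rule mentioned above, here’s a rough sketch (my own illustration, not iNat’s actual code, which also walks the taxonomic tree and counts ancestor agreements) of how a community consensus could be computed from a set of IDs, where a taxon only wins once strictly more than two-thirds of identifications agree:

```python
from collections import Counter

def community_consensus(ids):
    """Return the taxon agreed on by more than 2/3 of the
    identifications, or None if no taxon clears the threshold.

    `ids` is a list of taxon names, one per identification.
    """
    if not ids:
        return None
    counts = Counter(ids)
    taxon, votes = counts.most_common(1)[0]
    # Strictly more than two-thirds of all IDs must agree.
    if votes > 2 * len(ids) / 3:
        return taxon
    return None

# Two agreeing IDs reach consensus (2/2 > 2/3)...
print(community_consensus(["Anas platyrhynchos", "Anas platyrhynchos"]))
# ...but one dissenter among three blocks it (2/3 is not > 2/3).
print(community_consensus(["Anas platyrhynchos", "Anas platyrhynchos",
                           "Anas rubripes"]))
```

Under a rule like this, a single well-reasoned disagreeing ID can hold an observation back from consensus, which is exactly why a blind “agree” carries real weight.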

I know gamification is part of what can make some sites popular and drives repeat visits but iNat has so much to offer. Maybe Seek is the app for games and ranks and rewards. iNat is its own reward.


You can mute someone; they’ll still ID your stuff, but you apparently won’t get notifications. I haven’t used it, so I’m not sure how well it works. Edit your profile and then “manage your relationships”:


That’s been my concern about iNat from the beginning. If you make it too easy and too game-like, I think the quality control can start to suffer. I understand the rationale about making it inviting to a wide range of people, but I think it results in problems like the topic of this thread and brings out the competitive side of many users. Personally I’m not interested in being the top submitter or top identifier – I’m more concerned about occasionally contributing something useful biologically, even if it’s in smaller quantities.

There’s always been this tension between the iNat that is open to non-biologists with widely differing levels of experience and the iNat used by more research-oriented participants. I’m not sure if that will ever be resolved without major changes in the site and I’m not sure if such changes would be desirable.


Or the leaderboard could only count actual IDs and ignore the subsequent agreements.


Thank you @cmcheatle - I appreciate this info!

I suspect the core of this issue is the fact that it’s so much easier to add an ID (even without the ‘Agree’ button) than it is to post an observation. Anyone can sit at a keyboard in the comfort of their home and rack up thousands of uninformed IDs in no time with little physical effort; some may just enjoy the game, others may feel as though they’re actually making a useful contribution. Adding an observation on the other hand involves going outside, taking a photo or sound recording, and then loading it. It’s much more work. Just think about the difference in time and effort involved in collecting and posting 100 observations compared to making 100 rapid-fire, uninformed IDs.

One of the ideas on an earlier thread that I think would help (not fix completely I admit, but help) is to have a probationary period for new users before they can add IDs. Let’s say you have to post 50 verifiable observations before you can add IDs for other people. There were arguments earlier that this would discourage new users. I’m not convinced it would discourage most who were serious about the site, but I think it would discourage many of the agree-bot users.

I don’t think this is quite as radical as it sounds. After all, we have a probationary period for adding projects and places—why not for IDs? It would just have to be made clear to new users that there were a few things they couldn’t do until they had submitted 50 verifiable obs, and that this was to try and improve data quality on the site (stressing perhaps that this includes protecting the quality of their own obs, as well as those of others). I don’t think many who were genuinely interested in iNat would turn away.

But, as so often here, we’re speculating when we need an evidence base. Again, we could adopt an experimental approach—put the ID-probation in place for a while and see if there actually was a drop-off in uptake. I’d be very much in favour of trying it. And I’m convinced there’d be far fewer fish-hooks this way than with a reputation system, and it would probably be much simpler to implement…


While I imagine this would reduce the number of blind agreements from these users, one of the top Coleoptera identifiers is borisb, who has 0 observations. I would hate to lose identifiers like him if requirements were made too onerous.


Personally being able to add IDs right away as a new user was one of the things I liked most about the site. Of course my approach to it went “Of course I know what this is” to “Oh wow, there are how many species of this??” The hands-on part is how I learn best so I’m not sure if I would have stuck around very long without it.


That would be poor naming; “Casual” or “Ungraded” would be more apt. People want their observations to be IDed, I think, not left as some coloured pixels. And I’m not sure how that applies to “Needs ID”, which is where most of these would be stuck, not “Casual”.