Does 'Research Grade' actually mean anything?

Given the defaults involved in submitting observations anyway, it appears to just mean “two people have agreed on a species name” (except where flagged as no better ID possible).

It requires no specific data fields beyond the defaults for observation uploads (media/location/time), and agreeing with an ID does not provide or confirm options such as “agree with media”, “agree with location”, or “agree with date”. This is aside from the debate about whether casual/cultivated items are useful with or without the label.
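For what it’s worth, my understanding of the current criteria can be sketched roughly as below. This is a simplified, unofficial sketch: the field names are illustrative, and the real system has more edge cases (e.g. “ID cannot be improved” votes and captive/cultivated flags).

```python
def is_research_grade(obs: dict) -> bool:
    """Rough, unofficial sketch of the Research Grade check.

    Assumes `obs` carries illustrative fields: media/location/date
    (the upload defaults) plus ID counts and the community taxon rank.
    """
    # "Verifiable": the default upload fields are all present.
    verifiable = all(obs.get(k) for k in ("media", "location", "date"))
    # Community taxon must be at species level (or flagged as good as it gets).
    at_species = obs.get("community_taxon_rank") == "species"
    # At least two IDs, with more than two-thirds of identifiers agreeing.
    agreeing = obs.get("agreeing_ids", 0)
    consensus = agreeing >= 2 and agreeing / obs.get("total_ids", 1) > 2 / 3
    return verifiable and at_species and consensus
```

Note what is absent: nothing here touches annotations, observation fields, or any review of the media itself, which is the gap being pointed out above.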

An expert can upload specific specimens but never reach Research Grade because no random person has clicked the Agree button, even though every annotation and multiple observation fields may be completed, with detailed notes in the description.

I think Research Grade either needs a bit more to it (e.g. completion of the default annotations), a re-label as something else (e.g. as discussed here:), or just removal, as it literally adds nothing except ‘someone else clicked the Agree button.’

Per the thread linked above, I think a confidence meter is probably the best representation of the feature, but I think Research Grade creates the wrong incentives and is a poor indication of observational or review quality.

This question was prompted by seeing multiple threads complaining about or discussing how people perceive Research Grade, and they clearly show that the term is not functioning as a useful scientific/data label, but rather as external validation of input.

Perhaps at a minimum make the community vote for ‘needs more IDs’ clearer or more defined as ‘needs expert input’ or something.


or just remove it as it literally adds nothing except ‘someone else clicked the Agree button.’

This is incorrect. It’s a data quality indicator for use in research databases.

Perhaps at a minimum make the community vote for ‘needs more IDs’ clearer or more defined as ‘needs expert input’ or something.

The point of citizen science is that non-experts are allowed and encouraged to participate. This is also just immensely impractical, unless you want people sending in resumes before they are allowed to identify an observation.


Yes, it’s an indicator that someone else clicked the agree button. Which makes it more likely to be correct than if someone else didn’t click the agree button. I’ve always thought that the phrase “Research Grade” gives a false sense of quality or status. I’m keen on the confidence meter idea, simply because it conveys more information. One can still set two IDs as the level of confidence for sending it on to GBIF.

Green for 3+ IDs, yellow for 2, red for 1.
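A minimal sketch of that rule (the thresholds are the ones suggested above; the function name is my own):

```python
def confidence_color(agreeing_ids: int) -> str:
    """Map the number of agreeing IDs to a traffic-light confidence colour."""
    if agreeing_ids >= 3:
        return "green"   # strong agreement
    if agreeing_ids == 2:
        return "yellow"  # current Research Grade threshold
    return "red"         # observer's ID only
```

One could still use the yellow threshold (two IDs) as the cutoff for sending observations on to GBIF, so the meter adds information without changing what gets exported.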

I think distinguishing between ID made by the observer and one other person vs. three people would be a nice feature. Even if it was added on to the existing Research Grade status.


One of the most important rules of building any long-term dataset is that you never change the definitions of your variables part way through. Research Grade is clearly, consistently, and publicly defined. You can argue about whether a different phrase or different definition would have been better, but now that the dataset has been a-building for over a decade, and the definition has been applied almost 75,000,000 times, changing it for any reason would be a truly terrible idea.


Yes, it’s an indicator that someone else clicked the agree button.

Why the general opposition to the idea of community consensus, on a website built around using community consensus for science?


I would also add that I know of a few citizen science databases that have expert verification, and where I often encounter them is when people upload observations here with an ID they got from an expert, and it was wrong. One platform in particular that I examined had an inaccuracy rate of more than 10% overall, which for one of the more challenging species jumped to a rather impressive 40%.

On iNat you get something that isn’t on those other platforms: every observation is up for review all the time. Mis-IDs only require one person to see them to get fixed. In other databases, there is often no way to even contact someone to get mistakes re-reviewed, let alone a way to review the entire dataset.


No opposition from me! Just stating the facts. :)


I’m not stating opposition to community consensus. I’m stating opposition to the term Research Grade, which has weak validation controls for achieving its status and provides misguided incentives for the community using it (i.e. people complaining about not getting ‘Research Grade’ for observations as though it were a certification).

‘Research Grade’ is not a certification, and it doesn’t create significant consensus (it only requires the original uploader and any one other person, from anywhere, to agree). In other words, I could go through every single ID I can find on iNat and either agree or add some random but look-alike species ID, and completely corrupt the dataset, because there are no other checks beyond hoping others come and review it all.

I get the benefit of Research Grade meaning the data will now be shared with external providers like GBIF, but surely (resource availability permitting!) someone would at least do a cursory review for appropriate taxa before publishing, which could occur whether there’s one ID or 20?


On the one hand, many dedicated and knowledgeable people work on iNaturalist. Museum collections with graduate students may actually have less accountability when mistakes exist.

On the other hand, one click on the CV suggestion and one followup agreement is a fairly low standard for inclusion in GBIF.

Also, let’s remember that we’re all here to make this site as successful as possible. Arguing about these topics may lead to actionable changes the staff can implement to improve the legitimacy of the ‘Research Grade’ label; or not, and we had fun anyway.


‘Research Grade’ is a highfalutin phrase for sure, but good enough for its purpose here. We could call it George and it would still mean the same thing.


The few times I interacted with non-iNaturalist plant observations on GBIF, I always found some completely misidentified herbarium specimens. I actually tried to contact one of the herbaria to tell them they had uploaded a misidentified specimen to GBIF, but they never replied or fixed it. So as far as I can tell, iNaturalist’s “research grade” observations may actually be the most accurate dataset there is, and if someone ever finds a mistake, it is very easy and immediate to fix. Not saying that we shouldn’t try to improve it further, but there are just some inherent limitations to what can be done.


In theory, sure, but in practice an iNat blog post back in 2020 showed that the top 2,000 identifiers placed 87% of the IDs (I don’t know what it is now), so there are not as many identifiers as you might think. And most of us have people we reach out to for help with IDs. Personally, I can think of quite a few cases where 4 or 5 people agreed on the wrong ID and went inactive, so I tagged 8–10 people to help flip it. If you went through placing bad IDs willy-nilly, that would be annoying, but it would get fixed. Also, if you look at who the top identifiers in many taxa are, you’ll see a lot of real experts in what they’re identifying.


I agree that the phrasing is flawed, but I agree with those saying changing it would be harmful and/or fruitless even more. It’s not a perfect narrative but it’s sufficient and changing it would lead to a lot more confusion than what already exists.

Once you learn what it really means, it does make sense. It doesn’t necessarily mean “this is 100% accurate” but it’s “good enough”, and that’s how they’re treated once they reach RG. So, based on that, while imperfect, the wording is good enough for me and I’d hate to see it changed this late in the game.


Can this subject be merged with this one: (the link that OP already referenced)?

It’s essentially a rehashing of the same thing, and a repeat of the same topic and ‘question’ that’s been raised, discussed, and settled many times in the past.


You could. Begin to. But when iNatters recognise your name for the wrong reason … we fight back with available tools.

I would like the two who agree to exclude the observer, making it more objective. But that falls at the first hurdle: for many swathes of biodiversity it is hard to find a second identifier, let alone a third! We each evaluate where the next ‘Research Grade’ falls on our own scale.

PS: when I have helpfully IDed based on the placeholder text, I withdraw if Research Grade would rest on just my placeholder-derived ID and the observer’s. Count me out! We need an impartial second here.


Oh my goodness, yes. I have spent HOURS trying to contact people to fix errors in some of the “expert” databases. Most of the time I’m never able to even get a response.

There are several dozen plant species that appear on all the botanical checklists for my county only because they are in a certain database with incorrect GPS coordinates entered, or because some expert misidentified something.

The iNat species list is actually much more accurate for my area: there are always a few random misidentifications in there, but they never stay for long.

I think large numbers of non-professionals reviewing things constantly is much more likely to provide accurate results in the end than a small number of professionals whose work never gets checked, and who often have exaggerated ideas about their skill levels.


I’m almost willing to do this, but not quite, since the OP here is questioning the meaning/value of the designation overall, and not merely changing its name.

But that said, I do encourage those who specifically want to support or oppose a name-change to post those thoughts in the other topic, which I re-opened for the purpose. If the naming discussion continues here, we’ll probably end up closing this topic and re-directing to the other one.


Of course it means something, but the name of RG doesn’t matter; it could be called a purple fungus, and if you and I know what it means, that’s all that’s needed. RG moves an observation from one state to another, changing its place in the system. RG is not poor: there are mistakes, just as there are in “pro” collections, but RG mistakes are easier to fix, and the majority of RG observations are correct. It’s easy to miss that if you’re looking only at specific taxa that can get challenging.


I wonder if some of these concerns reflect an inflated impression of the ‘cleanness’ of academic research: as if researchers operated in a hermetic bubble that only excellent data can enter, because it has all passed through peer review and other quality checks, and any bad data entering damages the research and misleads or inconveniences researchers. Actually, research is messy; you’re always dealing with assessments of the quality of the data you’re using and the papers you’re citing. Writings on various taxa often begin with ‘we examined specimens in such and such a museum and found this many misidentified…’ ‘RG’ just means an observation has reached a point where it is worth offering to researchers. Any researcher using the dataset without personally assessing its accuracy for the particular taxon is simply not doing their everyday job. Then again, as mentioned above, there are plenty of reasons to think the accuracy is pretty good for a great many taxa.


What is The Matrix?