Agreeing with experts and "research grade"

Hi all,

Sorry, I’m sure this has been discussed extensively somewhere, but I’d like to get input on this question for myself. If I post an insect photo from an obscure family/taxon and I see that it has been identified to species by a recognized national or international expert on that family, is it “OK” to agree with the expert in order to make it “research grade”, even if I couldn’t have identified it to that level myself? Given the dearth of experts on iNat and in general, isn’t it a lot to ask to have two separate experts visit my observation and agree with each other? I am very cautious about agreeing with random folks or people with anonymous profiles/names, but this would seem to be a different situation…



Identifications should reflect your expertise. That said, it’s a pretty common practice.


I often think of it as “peer review”, as in peer-reviewed scientific literature. It wouldn’t be good practice to have just one outside expert review the work before publication, nor to have just one expert plus a non-expert adding a “sure, if you say so.” If it takes longer to “get published”, so be it.


(Sorry for the low-effort post, but I think the linked post covers everything I want to say here, and I don’t have enough time to write a new version right now.)


I usually take an approach that’s somewhere in the middle for taxa I don’t know well. If I have at least some experience identifying that group and some kind of identification resource, I might agree if I can convince myself “OK, I can see why this is X and not Y”. If I literally have no idea beyond order, I typically wait for a third party to confirm.


This is a great question. And while I’m sure it has been discussed before, it merits more discussion.

I would wager that there are a significant number of research grade observations similar to what you describe. The issue here is a tradeoff between speed and accuracy. On the side of speed, one could argue that developing a body of RG observations quickly is useful because those RG observations then become an important source of information for other users who can use them to become good identifiers.

On the side of accuracy, I am certain that experts make mistakes. And what good are RG observations if they include errors? If they are honest with themselves, I’m sure experts would welcome having other observers provide careful second opinions about the identity of observations. The entire scientific review process is built on the idea that truths are objective and revealed through careful consideration of the evidence–in fact many papers are now reviewed blind, i.e. with their authors’ names obscured so that the reviewers aren’t unduly influenced by their reputations.

This topic raises some bigger questions about how iNaturalist challenges the historical paradigm of taxonomic expertise. Science is a collective project that requires a community of practitioners who share a set of criteria and who communicate often. Science cannot be done in isolation. However, professional taxonomists have generally specialized in a narrow range of organisms, and shared their work in niche publications that are not always widely read and that are often difficult for others to find, even if you have access to a world-class university library. Specialization is obviously a necessary aspect of taxonomy, but the risk is that if everyone has a narrow specialization, this reduces the number of people who can review each other’s work. And, as I keep repeating (sorry!), that’s key to doing science. In fact, when two experts work on the same taxa, they quite often end up getting into big arguments! (Ask anyone who knows taxonomists.)

I’m excited about iNaturalist because it opens up this process to a much wider range of people from the beginning. Experts have an obligation within this framework to justify their identifications with objective criteria, just like everyone else. While we can learn a LOT from experts–I don’t want to discount that (I LOVE EXPERTS don’t get me wrong!)–learning from experts is different from simply deferring to their authority without critically reviewing the evidence. Experts can sometimes learn a little from us non-experts too!


I think it’s definitely situational, and can be somewhat dependent on how many experts are reviewing a particular taxon. For leafhoppers and planthoppers, for example, there are a few of us regularly identifying (and that includes making sure the research grade observations really belong where they’re situated), so we sort of keep each other in check. For psyllids, though, this sort of peer-review checks-and-balances thing isn’t in place, because I’m essentially the only person identifying them. Because of this, I often choose to give ID explanations, especially on the more difficult taxa. I let the observer know what is important for the ID, and if the observer agrees that what I’ve outlined applies to their observation, they are free to agree (or disagree! no expert is infallible, especially when it comes to photo identification). Likewise, when experts give me an explanation or a reference backing up their IDs on my observations, I’m more likely to try to check it and agree if I think it’s appropriate, as I do love to learn about the things I find. Otherwise, I’ll leave it to the experts.


Personally, no, I never do that. My identifications represent my own opinions, and I think blindly agreeing with experts just amplifies problems when the experts make mistakes. I don’t know why people get so hung up on making things “Research Grade.” I wish we’d just called it “Arbitrary Green Label.”

My only caveat is that I do consider an expert identification to be one of several sources of information I might weigh in making my own identification. E.g. if Mark Egger (a Castilleja expert) says something is Castilleja applegatei, and after consulting Jepson and Calflora I’ve narrowed it down to C. applegatei and C. affinis and I’m just not sure how glandular the hairs have to be to make it C. applegatei, the fact that Mark ID’d it that way suggests to me that the degree of glandularness in this particular photo is glandular enough for that species. Though frankly I’d probably still ask Mark to confirm that.


I agree with @kueda. I want my IDs to represent my own knowledge/experience and not blind agreement. I will agree in situations where I IDed to genus conservatively but had a strong guess as to species and an expert confirms that or where I had my ID down to two species and one was confirmed.

One other scenario in which I will “agree” is when an expert points out features that support the ID that I can confirm myself (like, “this individual has a white eyering, indicating it is species X”).

Otherwise, agreeing without any specific knowledge just to get an observation to “RG” doesn’t seem to be of long-term benefit.


On the other hand, people can blindly agree with obs on the Identify page. It is not possible to see whether the identifier is an expert or whether the identifier provided explanatory notes. Yet it is one click for what is a very superficial agreement.


As someone who IDs a lot of taxa that no other expert might ever see, I always appreciate when the observer adds an agreement to make the ID research grade. Mostly, my goal is to add training data for the auto ID tool, which requires a lot of RG observations. The tool often performs quite poorly on invertebrates, because they are diverse and there’s usually a serious lack of taxon experts on iNat. Sure, there will be mistakes, but IMO “blind” agreements make the whole process go a lot faster.


If someone provides an ID for me in a group of animals I’m not very familiar with, I’ll often end up agreeing with it. But before doing so, I’ll do enough research to make sure the ID seems at least reasonably likely, typically by Googling the suggestion to learn more about the appearance of that species. Often I will ask the person who provided the ID which details of the photograph made them decide on that ID. I’ll definitely consider how much of an expert the person seems to be as one of my criteria for agreement, but I won’t just blindly agree.


The other angle here is that, at the current time, if the expert deletes their account and no one else has added the ID, it just vanishes and you lose the ID. It doesn’t just get crossed out; it actually disappears. This has happened at least once, and it means I’m more likely to agree with the expert if the ID looks reasonable. That’s my motivation more than research grade.

Relatedly, I am teaching a coworker/friend plants in the field, and they are adding them to iNat. I am sometimes unsure whether it’s double dipping to go and agree with their IDs if I am the primary source. But they already know lots of plants too, so it’s a kinda odd grey area from the other side.


Does everything a user did get deleted if they delete their account - including their observations?


Observations, comments, and IDs are deleted. Some other things, like taxa they may have created, or flags they made, aren’t. More discussion about that here:


I feel strongly that if someone doesn’t know how to definitively ID a plant, I want them to refrain from identifying it.

Now, there are times when I see a post and it’s a species that I don’t know how to ID to species level, but perhaps I could ID it to family or genus…and then I see the previous user’s ID and I’m able to then look up all the possible options, and how to ID them, and I can verify that the ID is correct, and in the process, learn how to ID that species. (For example, if it’s a plant, I might check BONAP’s range maps and the Plants of Pennsylvania book, which I own, and go over all possible species and read their descriptions.)

Or…in some cases the species is very distinctive and not easily confused with anything, but I just wasn’t familiar with it, and when I see the user’s ID, I’m able to consult a few sources and verify that there is enough for me to independently ID it, and now I’ve learned how to ID a new species, and in this case I will then agree with the identification.

If people are doing this sort of thing, that’s great. Not only are they contributing to iNaturalist, but they’re becoming better at ID and learning new species or other taxa!

But if they aren’t doing this work, I’d rather they just sit back and let other people do the IDing. If they have a hunch or a guess, but are not sure, they can share this verbally in the comments.


As someone who makes observations and identifications, as well as actually using the data (at a non-professional level), I’ve seen the different ways this affects records.

As an identifier, I’m not always going to be 100% certain when I make an ID, especially if my ID isn’t going to make the record Research Grade (I apply a higher standard when my ID is the deciding one). As a rule, I’ll either be uncertain, and say so in the comment, or confident, or certain. Unless I’m certain, it’s a bit of a concern when the observer (seemingly blindly) agrees with my ID (especially when their previous ID was Class level, or spectacularly wrong). If I am certain, I will occasionally provide links and descriptions to help the observer confirm my ID more thoroughly, and I think this helps when there are only a few identifiers for that taxon (it also gives other identifiers a frame of reference for when they disagree, or if I disagree with them).

As an observer, I will often have a more specific taxon in mind when I ID my own observation. If someone I know to be an expert confirms my thoughts, then I will agree. Otherwise I will leave my initial ID as-is, or hit up external sources and confirm that way. I don’t agree unless I have managed to convince myself, one way or another.

Finally, as someone who uses the data (in an AI-based side project), and knowing how Research Grade works, I always run my eye over the data that goes in, because I know the system often produces misidentifications, even at research grade. It would be nice to not have to do this, but I can see that improving the system will be difficult.

So how can it be improved? You could try increasing the requirements for Research Grade, but that will cause issues when there are few experts for a taxon. One thing I would suggest is adding a confirmation step for when an observer tries to agree, with a short blurb on when it is appropriate (e.g., “to reduce the chance of misidentification, please confirm you have taken steps to validate this ID”, with Yes/Cancel buttons). It makes the process less streamlined, but I’d argue this is one situation where that’s not a bad thing.
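To make the suggestion concrete, here’s a minimal sketch of that confirmation gate. All the names here are hypothetical, nothing is taken from the actual iNaturalist codebase; the real agree button would call whatever UI prompt the site already uses.

```javascript
// Hypothetical sketch of the suggested confirmation step.
// gateAgreement() asks the user to confirm before an agreement is
// submitted; confirmFn stands in for a UI prompt (e.g. window.confirm
// in a browser) that returns true to proceed and false to cancel.
function gateAgreement(confirmFn) {
  const message =
    "To reduce the chance of misidentification, please confirm " +
    "you have taken steps to validate this ID.";
  return confirmFn(message); // true → submit the agreement, false → cancel
}
```

In a browser this would be roughly `gateAgreement(window.confirm)` wired into the agree button’s click handler, only submitting the ID when it returns true.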


Thanks everyone, this is very helpful! I really appreciate the discussion. A couple of comments:

I can’t help but think that those folks who never “agree” with an expert’s ID unless they could make the ID themselves (to paraphrase anyway) are being a bit too purist for me. After all, I’m posting the observation to try to identify the bug, specifically hoping that someone with expertise will help me identify it, and then when an internationally known expert who has seen thousands of these critters and written the key papers on how to ID them weighs in, I shouldn’t “agree” with them??? Isn’t that sort of crazy? :) Yes, experts can be wrong, of course, but it’s pretty much as good an ID as I will ever get, unless I collected the critter and sent it to them. And there are unfortunately a number of obscure groups of insects where there may really be only one or two aging experts, so getting two folks may be too much to ask…

@kueda says that “research grade” is an arbitrary designation/name, but doesn’t “research grade” have consequences for the observation (i.e. sent over to GBIF and other databases) versus non-research grade? So it’s not just a funny name, but means something in terms of what happens to the data, right? Or am I missing something there?

Thanks all,