Agreeing with experts and "research grade"

I could see this being pretty discouraging to an eager new expert member who immediately uploads a bunch of accumulated observations in their area of expertise, only to find that their IDs count as zero against other random, non-expert IDs that show up on some of their observations. I’m a patient guy, but I don’t think I would have the patience to wait 3 months for something like that to resolve.

I do definitely support more restriction on availability of the Agree buttons, however.

9 Likes

Yeah, the first time I suggested this I also added the possibility of curators “releasing” experts, or even non-experts who have shown that they “get it” as far as how CID and the Agree button should be used. But it gets pretty discouraging to have to repeat yourself… Maybe I should have just linked to one of the other times I suggested it.

1 Like

Instead of any forced rules, I think it would be best to start off with “soft” rules or nudges, like pop-up messages the first couple of times a new user clicks “Agree”, and renaming or minimizing the importance of Research Grade. If it turns out that people ignore those, then I can see justification for implementing something harder.

That said, not having the Agree button for the first 3 months might help to convince users who have already been here for a while, and I think it wouldn’t have as many negative consequences as the other suggestions.

7 Likes

After looking at all the suggestions here, I would probably favour the implementation of an identifier reputation score. The Research Grade requirement would be set as a threshold, and the cumulative score of all IDers would have to clear it to push an ID to Research Grade. Your reputation for a particular taxon wouldn’t necessarily be visible to anyone (though it could be used to replace the “Top Identifiers” ranking), but it would be based on things like how many IDs you have at that species and/or genus, how many observations, how many “first” IDs, how many times you’ve been wrong, how many times you’ve changed your mind, etc. The threshold would probably be set so that an expert and a reasonably competent amateur together could take an observation to Research Grade, if there are no opposing IDs. I’d also envision that no single ID should be enough to make an observation Research Grade by itself, or even with the help of an additional inexperienced “Agree” (i.e., the threshold is 15, but an expert maxes out with a reputation of 10).
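To make the arithmetic concrete, here is a minimal sketch of that threshold check. The threshold of 15, the cap of 10, and the function names are just the illustrative numbers from my example above, not anything iNat actually does:

```python
# Hypothetical sketch of the proposed reputation-threshold rule; the
# threshold, cap, and names are illustrative assumptions, not iNat logic.

RESEARCH_GRADE_THRESHOLD = 15
MAX_REPUTATION = 10  # no single IDer can reach the threshold alone

def reaches_research_grade(supporting_reputations, opposing_reputations=()):
    """True if the cumulative reputation of agreeing IDers clears the
    threshold, net of any opposing IDs."""
    support = sum(min(r, MAX_REPUTATION) for r in supporting_reputations)
    opposition = sum(min(r, MAX_REPUTATION) for r in opposing_reputations)
    return support - opposition >= RESEARCH_GRADE_THRESHOLD

# An expert (10) plus a reasonably competent amateur (5) gets there;
# an expert plus an inexperienced "Agree" (2) does not.
assert reaches_research_grade([10, 5])
assert not reaches_research_grade([10, 2])
```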

A system like this would likely take some time to implement, so whether it’s worth doing or not would be the biggest question.

Interestingly enough, given that the score would likely be attached to the observation, your reputation score on an earlier observation might be quite a lot lower than on a more recent one. I could see a situation where you might be able to “confirm” an earlier ID, pushing it over the Research Grade threshold, by replacing your reputation at the time with your current one. This might seem like a bug, but it means you can take knowledge you’ve gained about that taxon and reapply it to your earlier IDs.

I do agree that the ID system could be improved, but there may be some consequences of a system based on reputation score that need to be thought through.

My immediate reaction was that if, on a reputation scale of 1-10, I had a score of 1 or 2 for a taxon, I simply wouldn’t bother IDing it any more because my opinion counted for so little. And obviously if I don’t ID that taxon, my score never gets any higher. When we need more IDers rather than fewer, that probably wouldn’t be a good outcome. Yes, in an ideal world low-reputation IDers would plug away at it and gradually increase their reputation (and I’m sure a few would), but we live in the real world and I think a lot of people (particularly newer users) might be discouraged and just throw in the towel.

My other concern is the possibility of manipulating the system. The unfortunate fact is that you may not actually need to know much about a taxon to be among the top identifiers—that’s quite evident already. All you have to do is hit the Agree button often enough (preferably after one or two IDs are in, so there’s less risk of being wrong), and you’re in the top ten/have a high reputation score; this can happen surprisingly quickly in a small country or for a rare taxon. I suspect some of the people who are perhaps more competitive in their IDing will want to have a reputation score of 10 for as many taxa as possible, whether they’re really experts or not.

And as has been noted by several people in this thread, experts do occasionally make mistakes. If that happens, it could be that bit harder to get enough ‘points’ together to negate their incorrect ID and get the obs to RG with the corrected ID.

Food for thought.

10 Likes

I’m still a new user but have made a fair number of IDs where I felt I could justify them. I had wondered how iNat judges a contributor’s expertise and pictured some “reputation score” being used, so I’m glad I found this discussion. In my opinion:

  1. Anybody making an ID should know that they are expected to be able to explain their decision. (This is just another way of saying “Please don’t just click the Agree button because somebody else added an ID”.)

  2. I like the idea of keeping a new contributor’s IDs from counting towards Research Grade because it’s simpler than the other suggestions and avoids the issue stated at the top. This might be a problem for taxa with few experts, however. Maybe iNat could generate a list of experts who are exempt from the limitation. (That identifies those experts to the iNat public, though.)

  3. A more scientific approach to the issue would be to first analyze existing iNat data. For instance, it should be straightforward to determine the effect of requiring three IDs for the Research Grade label; however, one would have to create a data set with known correct IDs, and I don’t know whether that is a realistic project. In the end, any scientific use of iNat data requires some knowledge of the data quality, and this work would help if done properly. It would have to be a broader goal of iNat to find out how to generate the best data quality possible (a never-ending goal), and it may be best to identify the kinds of questions iNat data should answer – if scientific use is desired.
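As a toy illustration of that three-ID question (emphatically not an analysis of real iNat data), here is a small simulation sketch. Every probability in it is an invented assumption: each identifier is assumed to get an ID right with probability p_correct, or to blindly click Agree with probability p_blind instead of checking:

```python
# Toy model only -- all probabilities are invented assumptions, and a
# mistaken checker is assumed to repeat the same wrong ID (a simplification).
import random

def wrong_rg_rate(ids_required, p_correct=0.9, p_blind=0.3, trials=200_000):
    """Fraction of observations reaching Research Grade with a wrong ID."""
    wrong = 0
    for _ in range(trials):
        first_ok = random.random() < p_correct   # the initial ID
        agreeing, blocked = 1, False
        while agreeing < ids_required and not blocked:
            if random.random() < p_blind:
                agreeing += 1                    # blind Agree, right or wrong
            elif (random.random() < p_correct) == first_ok:
                agreeing += 1                    # checker genuinely confirms
            else:
                blocked = True                   # checker dissents, no RG
        if not blocked and not first_ok:
            wrong += 1
    return wrong / trials

# Does requiring a third ID help? Only as much as identifiers actually check:
print(wrong_rg_rate(ids_required=2), wrong_rg_rate(ids_required=3))
```

In this model, each extra required ID multiplies the wrong-RG rate by the chance that one more identifier fails to catch the mistake, which is exactly why blind Agrees are the weak point.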

4 Likes

My experience has been that the majority of new users come to the realisation that their IDs/confirmations carry significant weight, and adjust their practices accordingly. It only really becomes problematic when a new user makes a large number of errant IDs/confirmations in a short timeframe, or when a large number of new users come on board at once, such as users signing up under duress for assignments or bioblitzes etc. The idea of a probationary period would be to minimise that initial impact, and to give a greater chance of guidance being applied early on. If an experienced user was helping set up an expert identifier, for instance, the 1:1 tuition would already be occurring, and the experienced user could “release” the new expert from probation immediately.

And again, I emphasise that this probationary period would only be about whether the ID or confirmation carries weight… They can still make IDs and confirm, but these only count once outside the probationary period (with perhaps the option, when releasing someone from probation, to “backdate” to their start date if the releaser feels all previous IDs have been consistent with the guidelines).

Another way to implement the probationary concept would be similar to how it works here in the forum, where you can only make a certain number of posts/replies per day until a certain “level” is reached.

@theorickert keep in mind that with CID we do eventually get there, but the number of correcting IDs needed is more than double the number of incorrect ones (to attain the >2/3 majority required), and the bigger problem is that the “wrong ID”, once “confirmed”, drops out of the Identify pool, reducing the number of potential chances of picking up on the mistake. Getting rid of the ability to “Agree” for new users until they understand the implications would solve a good chunk of this problem!
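For anyone wondering where that “more than double” comes from, here is a minimal sketch of the >2/3 cutoff. The real CID algorithm works across the taxonomic tree; this simplified reading only checks the ratio at a single taxon:

```python
def community_id_reached(agreeing, disagreeing):
    """True when agreeing IDs exceed 2/3 of all IDs at this taxon --
    a simplified reading of the Community ID cutoff."""
    total = agreeing + disagreeing
    return total >= 2 and agreeing / total > 2 / 3

# One wrong ID plus one blind Agree (2 IDs on the wrong side) takes
# FIVE correct IDs to overturn: 4/6 is exactly 2/3, still not enough...
assert not community_id_reached(agreeing=4, disagreeing=2)
# ...but 5/7 > 2/3 finally flips the Community ID.
assert community_id_reached(agreeing=5, disagreeing=2)
```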

2 Likes

As a long-time user of iNat (5+ years), I must have seriously misunderstood the term ‘Research Grade.’ All this time, I thought it was a label applied to the quality of the photo submitted, i.e. blurry vs. high quality, rather than how many iNat users agreed with your taxon ID. Since I put effort into making my photo submissions clearly identifiable whenever possible, they usually get automatically classified as ‘RG’ regardless of who helped ID it or the accuracy of the initial identification.

3 Likes

Out of curiosity, do you also go to the expert’s observations and agree with their identifications if the same conditions apply?

I came across this comment today from a user who has gone through and agreed with every species-level identification among an expert’s recently uploaded observations, while not adding a single identification to anything that the expert has left at genus level or higher. The use of “Google” in the comment is ambiguous: it could imply looking for research papers that describe the species, or it could mean looking at pictures under the Images tab and considering that good enough to agree with the initial identification since it’s coming from an expert.

Usually not, but I have considered it. I figured other people might have different opinions on how they want to “agree” with an expert’s ID.

If the expert’s observations are part of my review filter, then yes! But if they indicate in any way that they are uncertain, then I don’t. Part of the criteria I subconsciously use to assess “experts” whose IDs I will add weight to is whether they have access to or have referred to literature or reliable sources, and whether they have shown any openness to the possibility of being wrong themselves!

Thank you for the responses. This particular expert has co-authored a guide to Korean Heteroptera, so I assume he has access to the relevant literature! How do you feel about having your name become one of the ‘top identifiers’ for a species as a result of trusting/agreeing with an expert’s identification?

In the case linked above, my interpretation is that it’s not a matter of checking a review filter, but of the observations coming up on the dashboard because the user follows the expert. Even if it is an expert providing the observation, I would feel uncomfortable agreeing with identifications made on other continents just because a couple of Google photos look similar. Should I be agreeing with all of these anyway, though?

No, you should not. “Research grade” is an arbitrary standard, and you are under no obligation to hasten somebody else’s observation (or even your own) along to achieve it. Agree if you are confident, don’t agree due to peer pressure.

6 Likes

I agree with schoenitz: if you feel uncomfortable agreeing, then that is your cue that you should not! I sometimes “feel uncomfortable” with an ID that has been made, and would feel equally uncomfortable challenging it (with a dissenting ID)… Often in such cases I will make a comment, purely to encourage consideration by others (e.g. “I dunno… the eyes look wrong”).

2 Likes

Honestly, between the behavior I’m seeing on iNat and some of the responses here it sounds like I should lower my standards to be more in line with the rest of the users of the site.

To me this seems really wrong, assuming that the user has no experience with the relevant taxa and is just going with what the expert called them. My understanding is that “Research Grade” means the observation has been independently verified by at least two people, and treating it any other way strikes me as cheating. Perhaps I have stronger feelings about this than others, and to be fair, in the end it probably doesn’t make that much of a difference anyway…

4 Likes

Hi all,
This is a great discussion and I sure hope I, as an amateur community science person, haven’t been using the identify section inappropriately :)

As I read this thread, it occurs to me that perhaps there may be a way to add an instruction page/comment that IDers need to agree to before proceeding – not unlike the usage terms on various sites we frequent. What I envision would be short, maybe just a couple of brief rules, each of which needs to be “agreed” to before proceeding. Now hang on before getting upset: perhaps this could be shown just for the first few times a user goes into the Identify pages, and then disappear after one has shown some competence and familiarity. Just a thought that might be a way to teach newer users more about the mission, without being so onerous as to dissuade experts and regulars, whom we need very much. Anything that can instill a sense of welcome, as well as seriousness about the consequences of mindlessly agreeing or mistakenly providing IDs, could prove useful, I imagine.

3 Likes

I like your thinking here. However, I would hope that a mix of obscuring your reputation for a particular taxon, as well as making sure you “level up” to the maximum reputation as soon as possible would make it less onerous to be considered a reliable identifier.

As for your point about manipulating the system, a higher weighting needs to be given to those who are first to suggest an (eventually correct) ID, and to those whose IDs rarely get shown to be wrong. As an observer, you are usually the first to identify your own observation, so if you’re proficient at identifying that organism, you will build up reputation quickly, as it should be.

An example: I’m no expert, but I can reliably identify a Koala, or an Orca. I shouldn’t need a lot of IDs to be considered a reliable identifier and max out the reputation score – most of us shouldn’t. It doesn’t need to be about being an expert; it’s about being reliable. Sometimes that means being an expert, or even a world-class expert (e.g. if you’re IDing most sea sponges). Sometimes it just means having a basic awareness that you’re not occupying the planet by yourself (e.g. for Domestic Dogs, or Humans).
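A rough sketch of what that weighting could look like – every number and name here is invented, purely to show a first (eventually correct) ID earning more than a trailing Agree, and mistakes costing reputation:

```python
# Hypothetical reputation update; the weights are made-up assumptions.
MAX_REPUTATION = 10.0

def update_reputation(score, was_first, was_correct):
    """Reward leading correct IDs over trailing Agrees; punish errors."""
    if was_correct:
        score += 2.0 if was_first else 0.5   # leading counts far more
    else:
        score -= 3.0                         # being shown wrong is costly
    return max(0.0, min(MAX_REPUTATION, score))

# A reliable observer self-IDing an unmistakable species (a Koala, say)
# maxes out after a handful of observations:
score = 0.0
for _ in range(5):
    score = update_reputation(score, was_first=True, was_correct=True)
print(score)  # 10.0
```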

My understanding of ‘Research Grade’ is the same, but here in the thread we have curators saying they treat identifying differently, which is somewhat confusing, since I typically view them as ‘expert site users’.

Related to this issue, I have seen users with published research blindly agreeing with identifications on observations outside their area of expertise: Suborder (Agree), disagreement/different Suborder (Agree), Family (Agree), disagreement/different Family (Agree), Genus (Agree), Species (Agree). With a pattern like that, it’s hard to believe that the user independently verified that each identification is correct. And if scientists and curators are blindly trusting other users, should regular users be told to use the site differently?

Another issue that this behavior brings up is under the ‘Top Identifiers’ tab. At the moment, the user who agreed with the identification is listed first, while the expert is second. (It looks like the ranking is alphabetical.) If I also assume the expert is right and agree with his identification, the expert drops to third. One could argue that the ranking is meaningless, but it is what users check when asking for help, and it would certainly help if users confident in and capable of making identifications appeared at or near the top of that list.

Keep in mind that a photo is only one part of the observation. If you can positively identify an organism, even from a poor-quality photo, the observation contributes an awful lot of information. I see Research Grade as an indicator of the quality of the observation, not of the photo. All of a sudden, you know that an individual of a certain species was observed on a particular date, in a particular place, even if you don’t know what it looks like.

Of course a quality image helps facilitate a positive ID, and may also provide other information (appearance, behaviour, etc.) which makes the observation useful.

1 Like