Agreeing with experts and "research grade"

Hmm.

I still wouldn’t agree with someone just because they’re an expert. In the case you gave here, when the person ID’ed it, I would use it as an opportunity for me to learn how to do the ID properly, and then if I felt able to do so, I would agree.

Otherwise, I would say: “Not today.” And I don’t think this is crazy.

When I click “agree”, I feel that I am making a claim that I know how to identify the organism, a claim about my own knowledge. If I truly do not know how to do it, and my only reason is that I trust another person’s authority, then my clicking “agree” is misleading. The field was not designed as a testimony to the other person’s authority.

If someone truly is an expert like this, why do we even need multiple IDs? And why would their ID be able to be “overridden” if a bunch of other users came in and “agreed” on some other ID?

Yes, it does. But I think that the bigger problem on iNaturalist is, by far, things incorrectly ID’ed and marked as “research grade”. As an example that came up the other day, I found quite a few plants mis-identified as Pokeweed, far outside Pokeweed’s native range. But I also found a few plants that I identified as Pokeweed, genuinely outside its native range. iNaturalist provides a way of tracking things like the expansion of a species…but what good is it if it’s bad data?

I think the data would be far more useful if people erred on the side of caution in IDs.

4 Likes
  1. In the short term, it may be as good an ID as you’ll get, but over time other experts will find your observations, and others will become educated on how to ID the group in question. Physical specimens in herbariums and insect collections can sit for years and years before being sorted and identified by the relevant experts. It would be great to have way more experts on iNaturalist, but quickly agreeing with the few experts who are on iNaturalist does not solve that problem.

  2. If an expert is really the only person who can ID something, and that expert isn’t publishing or sharing objective diagnostic criteria that allow others to make identifications, then their expertise means nothing. Science is about sharing information.

  1. Exactly. If an expert IDs your observation, that’s great, you have your observation ID’d to your satisfaction. At this point, ask yourself what you gain by clicking “agree”. You’ve gotten your answer from the expert and it’s been recorded. I trust the experts too! I just don’t click agree unless I review some keys or something to at least rule out similar species.

  2. We need better guides and keys and these need to be shared more broadly. It would be cool if this could be built into or linked from iNaturalist taxon pages: https://forum.inaturalist.org/t/add-an-interactive-system-to-glean-diagnostic-features-from-identifiers-and-show-them-to-observers/3225

5 Likes

I’m not an expert by any means, but I will say this much: sometimes even experts in the field make mistakes. I’ve corrected a number of them on iNat, BugGuide, and other entomological websites that cover planthopper species.

4 Likes

Yep, trust, but verify.

4 Likes

In addition to the fact that experts can make mistakes, many (most?) of the people doing lots of identifying on iNaturalist are interested amateurs, not experts. Lots of them are in the process of learning how to identify the subjects better, which means they’re making some mistakes. This is super helpful for learning, but only if the mistakes get caught, which is less likely if the observer just agrees with the ID and makes the observation RG.

11 Likes

This seems to be converging with another recent thread (https://forum.inaturalist.org/t/issue-with-users-automatically-agreeing-to-an-identification/2987/48).

I’m fairly new to iNat, but I’ve been trying to ID a fair bit in my area of expertise. I have to say it does irritate me when an observer, who clearly had no idea about the ID, quickly agrees with me at species or subspecies level. A number of people in this thread have explained how they do their own checking before agreeing with someone else’s ID of their observation. I do the same, but I really think we’re in a minority. Simply looking at the speed with which many observers agree with my ID tells me they can’t possibly have had time to do any checking themselves. As we all know, the end result is that we end up with a lot of RG obs that have only one ‘reliable’ ID (and if that’s wrong…). That potentially degrades the dataset, and in effect defeats the purpose of the requirement for two IDs to reach RG.

In the other thread, requiring 3 IDs for RG was suggested as a remedy, but didn’t seem to be favoured by many (it wouldn’t be a problem for bird IDs for example, but for groups with few experts it probably would). Removing the Agree button for new users was also suggested, but I don’t think that would solve the problem—it’s very quick to type in enough of the suggested ID to make it come up.

I suggested stopping an observer who had entered an initial ID from changing that ID later (for that obs only), effectively preventing them from agreeing blindly with the next ID provided (see the other thread for a few more details). That didn’t go down well with the frequent users who, it seems, routinely upload obs with no ID or high-level IDs, and come back later to ID them.

This seems to be another argument for the ability to load obs in ‘draft mode’, then ID/edit/add to them if you want to before they go live. If that could happen, stopping people from changing their initial ID could be implemented, and the frequent users could still change their own IDs in draft mode. I suspect many of us would like the dataset to be more robust, and perhaps this would help. And who knows, realising that they can’t change an ID later may even make some observers spend a little more time on deciding their initial ID.

Does this mean that if I photographed a random beetle, submitted it as “Beetles”, and then months later was bored and decided to research what it was and figured out the genus, I wouldn’t be able to update the identification on my own?

6 Likes

Sometimes I think an observer’s own ID should be noted but not counted in the Community ID calculation. I think that might address several of the things that people in the Forum say irritate them.

4 Likes

Yes, it does. First, you might hope that “months later” someone else had already provided an ID. Second, I suspect that any measure like this is likely to have some downsides. For me, this one would be more than compensated for by the reduction in ‘automatic’ agreements.

1 Like

Which sounds similar to the idea mentioned above of requiring 3 IDs for RG, where one can be the observer’s ID.

1 Like

No, I’m not advocating 3 IDs for RG. That would not be the same as what I was talking about.

1 Like

I understand the distinction, which is why I used the word ‘similar’. The practical effect of removing an observer’s ID from the Community ID is to require 2 additional IDs, which is similar in outcome to allowing the observer’s ID and requiring a total of 3 IDs. In fact, I’d probably be OK with either of these approaches, or any others that have the same effect.

I will only agree with expert IDs if I can justify it, generally. If an ID is posted of some weird obscure species I can’t even find photos of, I’d rather look for literature to support confirming it. That said, when I know the identifier has very good expertise in a subject, I may be inclined to just go along with it.

3 Likes

This is what I call “adding weight” to an identifier’s ID. I will only do it if I have developed a degree of confidence in their IDs, and as long as there is no contention over the ID. I am “around” to alter my ID (or even delete it) if the need arises.

I started on iNat just like everyone else, confirming the IDs of experts, a) because it seemed polite, and b) because I was keen to get obs to RG. Nowadays I am far less concerned about whether they get to RG or not, and I am far less concerned about “fixing” wrong IDs from others. I think I have come to understand the concept of “Community ID”, but it took time! Hence I think 3 months of not having an Agree button (anywhere), and/or not having IDs made by new accounts (whether “new” is defined by time or by # of obs/IDs) count toward CID, would help tremendously.

I could see this being pretty discouraging to an eager new expert member who immediately uploads a bunch of accumulated observations in their area of expertise, only to find that their IDs count as zero against other random, non-expert IDs that show up on some of their observations. I’m a patient guy, but I don’t think I would have the patience to wait 3 months for something like that to resolve.

I do definitely support more restriction on availability of the Agree buttons, however.

9 Likes

Yeah, the first time I suggested this I also added the possibility of curators “releasing” experts, or even non-experts who have shown that they “get it” as far as how CID and the Agree button should be used, but then it gets pretty discouraging to have to repeat yourself… Maybe I should have just put a link to one of the other times I suggested it.

1 Like

Instead of any forced rules, I think it would be best to start off with “soft” rules or nudges, like pop-up messages the first couple of times a new user clicks “Agree”, and renaming or minimizing the importance of Research Grade. If it turns out that people ignore those, then I can see justification for implementing something harder.

Although not having the Agree button for 3 months might also help to convince users who have already been here for a while, and I think it wouldn’t have as many negative consequences as the other suggestions.

7 Likes

After looking at all the suggestions here, I would probably favour the implementation of an identifier reputation score. The Research Grade requirement would be set as a threshold, and the cumulative score of the IDers agreeing on a taxon would have to reach that threshold to push the observation to Research Grade. Your reputation for a particular taxon wouldn’t necessarily be visible to anyone (though it could be used to replace the “Top Identifiers” ranking), but it would be based on things like how many IDs you have at that species and/or genus, how many observations, how many “first” IDs, how many times you’ve been wrong, how many times you’ve changed your mind, etc. The threshold would probably be such that an expert and a reasonably competent amateur would be able to take an observation to Research Grade, if there are no opposing IDs. I’d also envision that no single ID should be enough to make an observation Research Grade by itself, or even with the help of an additional inexperienced “Agree” (i.e., the threshold is 15, but an expert maxes out with a reputation of 10).
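
To make the threshold idea concrete, here is a minimal sketch of how such a check might work. The cap of 10 and the threshold of 15 come from the example above; the function name, data structure, and sample scores are purely hypothetical and don’t reflect anything iNaturalist actually does.

```python
# Sketch of a threshold-based Research Grade check, using the hypothetical
# numbers from the post above (expert reputation capped at 10, Research Grade
# threshold of 15). None of this is real iNaturalist behaviour.

REPUTATION_CAP = 10   # even a top expert maxes out at 10
RG_THRESHOLD = 15     # cumulative reputation needed for Research Grade

def can_reach_research_grade(identifier_scores):
    """True if the combined (capped) reputation of the agreeing identifiers
    meets the Research Grade threshold."""
    total = sum(min(score, REPUTATION_CAP) for score in identifier_scores)
    return total >= RG_THRESHOLD

# An expert alone (10) is not enough, nor is an expert plus a brand-new
# account (10 + 1); an expert plus a reasonably competent amateur (10 + 6)
# crosses the threshold.
print(can_reach_research_grade([10]))      # False
print(can_reach_research_grade([10, 1]))   # False
print(can_reach_research_grade([10, 6]))   # True
```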

A system like this would likely take some time to implement, so whether it’s worth doing or not would be the biggest question.

Interestingly enough, given that the score would likely be attached to the observation, your reputation score for an earlier observation might be quite a lot lower than for a more recent one. I could see a situation where you might be able to “confirm” an earlier ID, pushing it over the Research Grade threshold, by replacing your reputation at the time with your current one. This might seem like a bug, but it means you can take knowledge you’ve gained about that taxon and reapply it to your earlier IDs.

I do agree that the ID system could be improved, but there may be some consequences of a system based on reputation score that need to be thought through.

My immediate reaction was that if, on a reputation scale of 1-10, I had a score of 1 or 2 for a taxon, I simply wouldn’t bother IDing it any more because my opinion counted for so little. And obviously if I don’t ID that taxon, my score never gets any higher. When we need more IDers rather than fewer, that probably wouldn’t be a good outcome. Yes, in an ideal world low-reputation IDers would plug away at it and gradually increase their reputation (and I’m sure a few would), but we live in the real world and I think a lot of people (particularly newer users) might be discouraged and just throw in the towel.

My other concern is the possibility of manipulating the system. The unfortunate fact is that you may not actually need to know much about a taxon to be among the top identifiers—that’s quite evident already. All you have to do is hit the Agree button often enough (preferably after one or two IDs are in, so there’s less risk of being wrong), and you’re in the top ten/have a high reputation score; this can happen surprisingly quickly in a small country or for a rare taxon. I suspect some of the people who are perhaps more competitive in their IDing will want to have a reputation score of 10 for as many taxa as possible, whether they’re really experts or not.

And as has been noted by several people in this thread, experts do occasionally make mistakes. If that happens, it could be that bit harder to get enough ‘points’ together to negate their incorrect ID and get the obs to RG with the corrected ID.

Food for thought.

10 Likes

I’m still a new user but have made a fair number of IDs where I felt I could justify them. I had wondered how iNat judges a contributor’s expertise and imagined that some “reputation score” was used, so I’m glad I found this discussion. In my opinion:

  1. Anybody making an ID should know that they are expected to be able to explain their decision. (This is just another way of saying “Please don’t just click the Agree button because somebody else added an ID”.)

  2. I like the idea of keeping the contributor’s ID from counting towards Research Grade because it’s simpler than the other suggestions and avoids the issue stated at the top. This might be a problem for taxa with few experts. Maybe iNat could generate a list of experts who are exempt from the limitation. (That would identify those experts to the iNat public, though.)

  3. A more scientific approach to the issue would be to first analyze existing iNat data. For instance, it should be straightforward to determine the effect of requiring three IDs for the Research Grade label (a rough sketch of that kind of check follows this list). However, one would have to create a data set with known correct IDs, and I don’t know whether that is a realistic project. But in the end, any scientific use of iNat data requires some knowledge of the data quality, and this work would help, if done properly. It would have to be a broader goal of iNat to find out how to generate the best data quality possible (a never-ending goal), and it may be best to identify the kinds of questions to be answered by iNat data, if scientific use is desired.
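
To illustrate point 3, here is a rough sketch, in Python with pandas, of the kind of check I have in mind. The file name and column layout are invented for the example, and it deliberately ignores the details of how the Community ID is really computed; it only asks how many observations that currently have two agreeing IDs also have a third.

```python
import pandas as pd

# Hypothetical export: one row per active identification, with the
# observation it belongs to and the taxon it names.
ids = pd.read_csv("identifications.csv")  # columns: observation_id, taxon_id

# For each observation, count how many identifications agree on its
# most-supported taxon.
support = (ids.groupby(["observation_id", "taxon_id"])
              .size()
              .groupby(level="observation_id")
              .max())

eligible_2 = (support >= 2).mean()  # roughly today's two-ID rule
eligible_3 = (support >= 3).mean()  # the proposed three-ID rule

print(f"Observations meeting the 2-ID rule: {eligible_2:.1%}")
print(f"Observations meeting the 3-ID rule: {eligible_3:.1%}")
print(f"Share of current two-ID obs that would keep RG: {eligible_3 / eligible_2:.1%}")
```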

4 Likes