Overzealous Identification

What if the rule was changed to require 3 people to agree for RG?

13 Likes

This wouldn’t solve the problem entirely, but I think it would help a lot. Some have expressed concern that it would suddenly create a huge bolus of Needs ID obs, but I think it could be applied only as of a given date. In particular, it would prevent an observation that receives one ID which the observer then confirms from becoming RG.

4 Likes

I know of a user who only confirms Research Grade observations and doesn’t ID observations that are still in Needs ID. I’ve noticed that this person has confirmed a few mis-identified Research Grade observations, and he holds a top-3 identifier position :-(

1 Like

Recently I’ve had a user acting like an agree-bot, running through hundreds of my Needs ID observations that each had a single species-level ID, and not once have they corrected a bad ID or improved one. When I correct a bad ID (the first one, mine), there is no evidence that they return. I now have hundreds of observations that needed a confirming ID but got an agree-bot ID instead, so hundreds of observations carry a low-quality RG status. I would like a better chance of my IDs being looked at so that misentries and misidentifications get caught, but this user has reduced that chance, since the Identify tool’s search filters exclude RG observations by default.

Is there a way to search for my observations that have been identified by a particular user, and to automatically mark the “can the community ID be improved?” option as yes on each one?
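
For the search half, yes: the observations endpoint of the iNaturalist API accepts both a user_id and an ident_user_id filter. The voting half needs an authenticated request, and I’m less certain of the exact endpoint, so treat the Python sketch below as a starting point rather than a recipe: YOUR_LOGIN, OTHER_LOGIN and API_TOKEN are placeholders, and the vote call is my reading of the API docs, not something I’ve tested.

```python
import requests

BASE = "https://api.inaturalist.org/v1"

# Find my observations that a particular user has added an ID to.
params = {
    "user_id": "YOUR_LOGIN",         # the observer (you)
    "ident_user_id": "OTHER_LOGIN",  # the identifier in question
    "quality_grade": "research",     # only the ones sitting at RG
    "per_page": 200,                 # pagination not handled in this sketch
}
obs = requests.get(f"{BASE}/observations", params=params).json()["results"]

# Voting "yes" on "Can the Community Taxon still be confirmed or improved?"
# knocks a two-ID observation back to Needs ID. The endpoint and payload are
# my best guess from the docs; verify before running against real data.
headers = {"Authorization": "API_TOKEN"}  # token from inaturalist.org/users/api_token
for o in obs:
    requests.post(
        f"{BASE}/votes/vote/observation/{o['id']}",
        headers=headers,
        json={"vote": "yes", "scope": "needs_id"},
    )
```

That would at least flag them for re-review in Identify without deleting anyone’s IDs.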

7 Likes

I would say keep messaging this person/ these people. Stay polite, but get a little firmer and more explicit each time. It’s important for them to know that iNat should not be mis-used.

I think a lot of people don’t understand that the data on iNat is often used by scientists, and therefore needs to be as accurate as possible. Some people think that iNat is just a game.

4 Likes

I think the leaderboards are the issue. Maybe a pipe-dream, but it would be good to have a way to identify the experts for a taxon without encouraging this type of behaviour. In fact, I’ve found that many of the most knowledgeable people only have a few IDs, because they are usually only called in as pinch-hitters to ID the really tough or interesting ones, while some of the top identifiers have only ever agreed with others.

15 Likes

If this continues to be an issue with a single, unresponsive user, you might consider the “block” feature. I’m not sure this is one of the intended use cases for it, but I see that behavior as essentially spamming your personal records with bad IDs, which seems like justification enough.

4 Likes

What about restricting users’ use of the Agree button until they have uploaded and correctly identified x number of observations of their own? You can’t post to this forum without passing its tests; I could see something similar before users are allowed to use the Agree button. I know this would be some work to set up, but too many misidentified RG observations turn the data into junk.

3 Likes

I find it irritating when I go through Unknown observations and add genus- or species-level IDs and the observer just instantly agrees with me without any question. I’m not an expert by any stretch of the imagination; I’m here to learn and then use that knowledge to help others. I’m wrong a lot of the time! When someone IDs an observation I posted, I usually ask why, or what to look for, so I know for next time. I also ask questions if I’ve suggested an ID on someone else’s obs and have been corrected.

Maybe we need more emphasis on ID suggestions being suggestions and not written in stone?

3 Likes

A couple of issues with this approach, primarily: how do you decide what is “correctly” identified, given that it’s a community ID that is never set in stone?

The second is that some of these users are prolific at both observing and identifying, so a volume metric wouldn’t exclude them. That volume is part of the issue: it’s not a big deal if a sporadic user adds a few wrong IDs here or there, but it can be a real problem when an overzealous identifier is adding hundreds or thousands of unsupported or even incorrect IDs.

2 Likes

I think this is at the heart of this issue. Many of us see IDs that we think are wrong, but we have to remember that they are putting the ID that THEY think it is. For many iNatters, experience and knowledge are something yet to be attained, so they are likely to be wrong at least sometimes. If you are talking about them “agreeing” with an already applied ID, then it follows that they are not the ONLY ones mis-IDing. The overzealous identifiers are not the problem here…

One thing that is problematic is the Agree button itself. It is too often used as a sort of “thank you” back to the identifier, a pseudo-acknowledgement for being given the ID on your observation. A probationary period similar to the Discourse forums’ has been suggested before, as has more on-boarding guidance on how to use the Agree button, but personally I think the button should be scrapped. It’s not hard to “agree” by typing in the first three letters of each part of the binomial…

Another problem is the overemphasis on RG, which this topic is evidence of. Many “overzealous” identifiers think they are doing good by getting things to RG. It is not the objective of the site to get observations to RG… that is a quality level that some observations will reach, but many shouldn’t. The objective of the site is to promote and encourage people to connect to and value the natural world, which, when you think about it, is being evidenced as effective by the overzealous identifiers and the passionate ID correctors alike!

By far the biggest problem is ineffective communication. I see it mentioned that users are being messaged and are not responding… to me that is probably because they never saw the message. If you leave a comment for an iNatter who identifies a lot, how are they going to spot it amongst the thousands of alerts they get daily? A direct message is definitely the way to go. If they continue to be unresponsive to challenges over what looks like inappropriate activity, then email help@inaturalist.org to look at taking it further.

11 Likes

The idea of 3 IDs for RG was discussed at length in
https://forum.inaturalist.org/t/agreeing-with-experts-and-research-grade/3718/43 and
https://forum.inaturalist.org/t/issue-with-users-automatically-agreeing-to-an-identification/2987/36

For groups where there are lots of identifiers (such as birds) it wouldn’t be a problem. But in some other groups, getting two identifications is already hard enough, and requiring three may mean more observations failing to get to RG.

Personally, I really wish there were some way of preventing RG from being reached on the strength of just one reliable ID, whether that happens via the “one ID plus subsequent agreement from the observer” scenario or the “one ID plus agree-bot” one. But as is obvious from those other two threads, there seems to be no simple way to achieve that without some downside.

Just as a thought for discussion, how about a trial period (6 months??) of requiring 3 IDs for RG, and then look at whether it really does result in a significant fall in the number of obs reaching RG. If it does, we could go back to 2 IDs and think again. Or perhaps go to 3 IDs for unaffected groups and 2 for the harder ones. The experimental approach.

4 Likes

I think I’ve mentioned this before, but I’d replace the ID leaderboards with a reputation system. Reputation “points” are collected, but they’re not visible. However, you are listed as an expert for that taxon if you are in the top 10. Only certain actions would accumulate reputation points, such as:

  • Every 10 RG observations you make of a taxon (at all levels; i.e., if I make 10 observations of Orcas, I earn reputation points for Orcinus orca, all the way up to “Life”)
  • Every “first” identification, where you (correctly) improve an ID
  • Every correction (assuming the community ID remains as that taxon or a descendant)
  • Every time you are tagged in an observation
  • You lose points for an ID that needs to be corrected - even if you later agree with the correction

These actions would not be advertised either, to help prevent gaming the system. For the most part, though, it wouldn’t matter, because accumulating points in most of these situations requires that you have the expertise anyway.
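
Just to make the mechanics concrete, here’s a rough sketch of how such a tally might work. The event names and point weights are purely illustrative (nothing like this exists on iNat), and the roll-up to ancestor taxa is left out for brevity.

```python
from collections import defaultdict

# Illustrative point weights only; not an actual iNaturalist feature.
POINTS = {
    "rg_observations_batch_of_10": 1,  # every 10 RG observations of the taxon
    "first_improving_id": 3,           # correctly refining an observation's ID
    "correction_upheld": 5,            # a correction the community ID settles on
    "tagged_for_help": 2,              # being tagged into an observation
    "id_later_corrected": -4,          # penalty, even if you later agree
}

def tally(events):
    """events: iterable of (user, taxon, event_type) tuples."""
    scores = defaultdict(int)
    for user, taxon, event in events:
        scores[(user, taxon)] += POINTS.get(event, 0)
    return scores

def experts(scores, taxon, top_n=10):
    """Scores stay hidden; only the top-N users are surfaced as 'experts' for a taxon."""
    ranked = sorted(
        ((user, pts) for (user, t), pts in scores.items() if t == taxon),
        key=lambda pair: pair[1],
        reverse=True,
    )
    return [user for user, pts in ranked[:top_n] if pts > 0]
```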

13 Likes

I like the idea in principle, but I think it’s better to be upfront about how the “reputation” is calculated; some people will try to work it out and game the system whether or not it’s made clear. Of these ideas for points, I’d suggest “every correction” is the most likely to have a perverse outcome.

2 Likes

Rather than “block” users, I’d like to be able to “mute” them. That is, I could click something to make their IDs not appear.

There are some leaderboard fanatics who seem reasonably conscientious, and I don’t mind them. But I woke up today to find 51 observations with “IDs” that just confirmed earlier work — including, for example, this one: https://www.inaturalist.org/observations/20017054

ID’ing a rare South American animal by a single footprint is pretty advanced work. I wish I could “mute” this kid without being a jerk.

Also, +10 to the idea of a background, algorithmic “reputation” system. I would add that people could get reputation from having their IDs added to “favorites”, or from any number of other metrics. The main thing is that the system would reward good-faith participation and not mere clicking.

4 Likes

How about renaming “Casual” to “Casual Grade” and changing it from grey to another color? It could just be a psychological thing, people want the nice shiny green RG badge of honor. I mean… If I’m being honest when I started on this site I felt the same way, that RG should be the goal. It wasn’t until I spent a significant amount of time here that I realized RG shouldn’t be and in many cases cannot be the goal.

That change might at least satisfy the part of the brain that hungers for badges and achievements?

7 Likes

I’ve been pretty direct in the past (for those newer to the discussions) about not being comfortable with much of the concept of a reputation system.

In particular I don’t like any solution that solely rewards 1st identifications. This is not some small local site with a couple of dozen or even a couple of hundred users. It is a global site with tens, if not hundreds of thousands of active users. Turning it into a race to be the first one to ID something seems counterproductive to me.

Many of the best identifiers on the site make few if any observations themselves, and punishing them for that seems inappropriate. Rewarding the number of RG observations will also simply lead to people flooding the site with records, which runs the risk of corrupting the data (hey, I saw a flock of a hundred Mallards, so I’ll just add photos of all of them, since that is the only way I can become accepted as a Mallard expert).

I strongly favour the consensus approach the site has designed, not some top down approach.

9 Likes

I would like to see a ranking for each identifier showing how many of their IDs have been confirmed by others, per taxon and per country. Or perhaps a grading scale with many levels, up to 100%, depending on skill.

On observado.org, only 25–40 of the 50,000 users are allowed to confirm determinations… which is, I think, a bit too few.

I was not aware until now that GBIF accepted “research grade” data from iNaturalist, which for me mostly eliminates GBIF as one of my sources for geographic occurrence data.

I have read the arguments in this and other threads as to why “research grade” = 2 agreeing IDs. I can see the tension between iNaturalist’s excellent main purpose of getting folks involved in nature and its secondary purpose of data gathering. But two random people agreeing on an ID, for all the various reasons they might do so, does not create data that is usable for research purposes.

Checking a few specific examples, I found a tropical species that, as a result of this, now has a few records in northern North America in GBIF. The ID was made and agreed to by two biologists of good repute who are not specialists in this taxon.

You mentioned that the onus will be on the researcher to clean the data from GBIF. I am trying to imagine how they would do so for any sizeable quantity, aside from simply eliminating all data from iNaturalist and any other sources of similar quality.

Perhaps “research grade” as a term could be replaced with something more accurate.

Thanks for listening.

3 Likes

And we have a winner… :)