Showing How Much Attention an Observation Has Received

Even with the qualifier, I have a problem with this emphasis on getting observations out of “Needs ID”. It encourages non-experts to make WAGs because “hey, anything to get the observation out of Needs ID, right?”

Clearly, we’re never going to clear out the Needs ID backlog - in fact, we can’t even hold the line. Perhaps we should figure out a better way to manage the backlog so that identifiers “waste” less time on observations that (in all probability) cannot be identified to species. Opinions seem to differ on the appropriateness of using the “as good as it gets” option to force observations to RG at a higher taxonomic level. My suggested status/queue feature would help an identifier keep chronic Needs ID observations out of their personal queue, but it won’t keep random folks from wasting time puzzling over something that likely needs expert attention (if it can be ID’d by anyone at all), or worse, pushing it to RG when it shouldn’t be.

Maybe we need some kind of flag that says “don’t bother looking at this one unless you’re a taxon expert”. Maybe it could be something that folks could upvote. Imagine a scenario where a number of experienced users are discussing an ID and can’t come to a consensus. Currently, you could have a well-intentioned (but naive) user adding a species-level ID without reading the discussion. All they see is an observation that is still in the Needs ID state, and from what they’ve seen in this forum, that’s a huge problem. Folks who do this are probably doing it to a lot of observations, so they probably won’t notice the notifications telling them their ID is incorrect or unjustified. Even if you discount the existence of such naive users, wouldn’t it be great if we could keep the non-expert from “wasting” their time reading a long discussion when, in all likelihood, they’ll decide they don’t have the expertise to contribute an ID?

Now, rewind things a bit. Imagine that during the original discussion, the knowledgeable users could vote on some kind of indicator that says “this is a tough one - don’t add an ID unless you know what you’re doing”. This might both prevent a blunder by the naive user and save the non-expert some time. Indeed, it could provide a target where the time-constrained expert could focus their efforts.

I guess my point is that we shouldn’t get so hung up on leaving observations in the Needs ID state. We should accept that some observations should STAY in that state. From what I’ve read, many feel that forcing higher-level IDs to RG is the wrong approach. At minimum, it may prevent those observations from getting the attention of experts. Perhaps we need an intermediate state between Needs ID and RG - something like “needs expert ID”, which could be upvoted.

6 Likes

I take heart from the statistics that show that the percent of Research Grade observations has hovered around 62-64% for at least five years now. To me, that means we identifiers are holding the line.

Also, we need to remember that observations that can never reach Research Grade can be pushed to Casual, with the appropriate use of a Data Quality Assessment. Of course, observers “should” do better; of course, iNat “should” provide better onboarding and training; of course, not every observer will (or can) follow the rules. So, send observations to Casual if they don’t show the details necessary to identify the organism below the family level.

For the record, I have no problem with multiple observers uploading their own observations of the same organism at the same time, in the same spot. iNaturalist data cannot be used to estimate abundance, anyway, so why not encourage all the observers of an organism to upload a record of their own individual engagements with the natural world?

8 Likes

This seems to be a common assumption - that the only possible problem with multiple observations of the same organism is that it confounds abundance estimates (which, as you say, can’t really be done anyway). But observation counts can be used as a very rough proxy for abundance. We do this all the time when plotting flight season charts.
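In case it isn’t obvious how that works, here’s a minimal sketch of the idea (the file name and column name are placeholders, not any particular export format):

```python
# Minimal sketch: turn observation dates into a rough "flight season" histogram.
# Assumes a CSV export with an "observed_on" column in YYYY-MM-DD format;
# the file name and column name are placeholders, not a guaranteed iNat format.
import csv
from collections import Counter
from datetime import date

counts = Counter()
with open("observations.csv", newline="") as f:
    for row in csv.DictReader(f):
        week = date.fromisoformat(row["observed_on"]).isocalendar()[1]
        counts[week] += 1  # one observation = one tally for that ISO week

for week in sorted(counts):
    print(f"week {week:2d}: {'#' * counts[week]}")
```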

Aside from that (as well as identifier frustration and time wastage), these duplicate observations clutter up external databases. The database I manage is interactive, and allows users to view lists of individual observations that contribute to the various maps/charts that we display on the UIs. Needless duplicates clutter things up. There are often major differences in the location descriptors (despite nearly identical lat/long). To the untrained eye, these can all look like discrete observations. At a certain point, you can’t see the wood for the trees.

I’m pretty good at weeding out the duplicates. I’ve created various tools for detecting them, but it’s a huge burden. I recognize that I’ll never succeed in discouraging folks from creating the duplicates, but the earlier in the process I can weed them out of my view, the less time I will waste on them.
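To give a sense of the kind of check involved - this is a rough sketch only, not my actual tooling, and the field names and rounding threshold are invented:

```python
# Rough sketch of one duplicate check: flag records with the same observer,
# taxon, and date whose coordinates round to the same point.
# Field names and the 3-decimal rounding (roughly 100 m) are illustrative only.
from collections import defaultdict

def near_duplicates(records, decimals=3):
    """Return groups of records that look like duplicates of one another."""
    groups = defaultdict(list)
    for rec in records:
        key = (
            rec["observer"],
            rec["taxon"],
            rec["date"],
            round(rec["lat"], decimals),
            round(rec["lon"], decimals),
        )
        groups[key].append(rec)
    # Anything flagged here still gets a manual look before I call it a duplicate.
    return [group for group in groups.values() if len(group) > 1]
```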

1 Like

Yes! I’ve made this suggestion several times. I think even an automated count of “times reviewed” would be immensely helpful. Show me the moths that 30 people have already reviewed without getting them to RG, because that’s where I’ll find the oddball stuff that’s stumping people. Show me the grasses that no one has even reviewed once yet, because that’s where I’ll find some low-hanging fruit that I might be able to help with. I get that there’s not a perfect relationship between times reviewed and difficulty of ID, but as a general heuristic, I think it would be a helpful tool. The way it stands now, a lot of things in Needs ID are really on the borderline of “ID Cannot Be Improved”, but there’s some sliver of hope that someone doing a deep dive into that taxon might be able to work it out.

22 Likes

Why not leverage the fact that a number of people (some of whom may have significant expertise) have already looked at it?

In fact, having a button you can click to vote “this one is a head scratcher” might actually have some benefit for the person doing the clicking. They may feel better about moving on knowing that at least they might be saving someone else the bother OR they may be drawing the attention of someone who can contribute a better ID.

I guess the count would help for those cases where few or no people have reviewed the observation in question.

3 Likes

I asked for that - and it was rejected. In this Forum we can see ‘how many views’ - 22 users and 555 views - who cares. But on an obs in Unknown or Needs ID I would like to know (not who - because people objected to an invasion of privacy, and an unjustified judgement on our part about their reviewing) - but how many people have already looked at this pixelated blur in despair?

14 Likes

Maybe the identifiers need to band together and form a union. Maybe take strike action.

1 Like

I hope that 2026 will bring improvements for identifiers - there’s a ssp demo to try out today …

1 Like

Reporting the number of “reviewed” clicks seems like it might be relatively simple to code. It might be useful, at least if people understood that a mere 40 “reviewed” marks doesn’t mean the observation is too bad to ID. Lots of people hit “reviewed” on perfectly identifiable observations that just aren’t in the taxon or area they’re interested in.

7 Likes

The actual number of reviewers would need to be treated with discretion. But - if it was visible on all obs - we would learn to use it.

1 Like

I moved the above posts from this topic because that topic had been marked solved and the conversation had diverged in a quite different, but coherent, direction. I gave the conversation a general title, but I’m happy to alter it as needed.

2 Likes

As a personal response to the ideas above, I would say that showing the number of times an observation has been reviewed (and IDers then using this to decide which observations to ignore) could incentivize users to delete and repost observations that get little attention, resetting their review count to zero in the hope of a fresh look. This is a common tactic on other parts of the web (e.g., selling sites), and users have created scripts to automate the process.

4 Likes

That could be mitigated by somehow factoring in when an observation was uploaded in relation to the views.
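For example, something as crude as reviewers per month since upload might be enough; a hedged sketch (the 30-day month and the one-month floor are arbitrary choices):

```python
# Sketch: age-adjust the reviewer count so a freshly (re)posted observation
# doesn't look neglected just because it is new. The constants are arbitrary.
from datetime import datetime, timezone

def reviewers_per_month(reviewer_count, uploaded_at):
    """Reviewers per month since upload; uploaded_at must be timezone-aware."""
    age_days = (datetime.now(timezone.utc) - uploaded_at).days
    months = max(age_days / 30.0, 1.0)  # floor at one month for brand-new posts
    return reviewer_count / months
```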

1 Like

there was some previous discussion here: https://forum.inaturalist.org/t/show-number-of-reviewers-for-an-observation/38623.

reviewers (as iNat defines it) is already captured and it should be relatively simple to display a count of those. views is not currently captured, and that would probably be harder to implement.

here’s my page that can show reviewer count: https://jumear.github.io/stirfry/iNatAPIv1_observations?user_id=pisum&quality_grade=needs_id&per_page=200&order=asc&options=reviewers,idextra. a lot of these observations with high reviewer counts are likely never going to reach research grade. but most of the older observations have low reviewer counts, and i’m not sure how to interpret that.
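for anyone who wants to roll their own, here’s a rough sketch of the kind of API call involved. it assumes the reviewed_by array is present in the /v1/observations response - treat that as an assumption to verify, not a guarantee:

```python
# rough sketch: fetch a page of Needs ID observations from the iNat API and
# print a reviewer count for each. assumes the response includes a
# "reviewed_by" array of user ids; verify before relying on it.
import requests

params = {
    "user_id": "pisum",          # example value, matching the link above
    "quality_grade": "needs_id",
    "per_page": 200,
    "order": "asc",
}
resp = requests.get("https://api.inaturalist.org/v1/observations", params=params)
resp.raise_for_status()

for obs in resp.json()["results"]:
    reviewer_count = len(obs.get("reviewed_by", []))
    print(obs["id"], obs.get("quality_grade"), reviewer_count)
```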

2 Likes

In some of the caterpillar projects I ID in, when I filter for Has Disagreements and Random order, I have been able to dig up a lot of perfectly identifiable observations where the pattern of IDs is

  1. Initial wrong ID by observer (7 yrs ago)
  2. Correct ID by early arrival superidentifier in project (7 yrs ago)

I don’t know how many reviews those observations have gotten, but since I can usually think of 5-15 current prolific identifiers in the project who would be able to ID it instantly (and frequently @ one of them to get the third necessary ID in), I’m assuming that they have gotten very few views in the last 7 years. I think this is especially likely if the initial disagreement pushed the ID back to Lepidoptera.

4 Likes

That was actually my point in the main discussion. I wish I had the words you found for the problem when I wrote the topic. My concern was about the numbers, the external use, and, of course, the extra work for IDers. Everything you said sums up my point perfectly.

The title of this discussion is a bit misleading, though. I thought this was about a “seen” count or something similar.
I do like the idea of “needs expert ID” or “difficult ID” as something you could vote on.
On the other hand: if you don’t think you have the expertise, you shouldn’t leave an ID either way, I think. I’m not sure whether ‘experts’ would use “needs expert ID” as a filter instead of simply filtering for their expert taxon altogether - in which case those observations should come up too.

1 Like

If you are a specialist doing a methodical taxon sweep (or me, working through neglected old obs), you recognise - seen that before, observer’s name is familiar and NOT for good reasons. Obviously not all of them, but I have had someone upload the same photo the next day - unfortunately for them. I’ve seen that, I remember you, and it is still the boring ID, not your exotic guess.

There is a window of opportunity, days perhaps, when an obs is new - against the broad and easy path where obs are happily dumped in … Angiosperms, for example. Out of the ID-a-thon I scooped up angiosperms in the Western Cape: I started at 73 pages with Date Observed Ascending and am down to 52, still in 2021 (a page a day, another 2 months or so?). Yesterday I found silver spoons! Which had waited in the green waste bin for 4 years - but it is a local endemic with only 21 obs. (PS: I @mention, then delete my comment when it has served its purpose - sandraf? and douglaseustonbrown? Those 2 are the taxon specialists!)

And also

We lack motivation for generalists to tackle those useless, ‘well-intentioned, following iNat’s declared guidelines’ broad IDs. There is gold in them thar hills, if not silver in Silvermine. My silver spoons live in Eden in the Western Cape ;~) @pisum how many reviewers for the silver spoons? Minus me and the 2 that I pulled in, and the observer - who are the 4 we CAN see.

Rubbing salt in the wound: the linked forum post has 751 views and engagement from 18 users. I would rather see that views info for my silver spoons. Has this not even been looked at? Worth an @mention then. Versus: been there, done that - keep moving along, nothing to see here.

When I was very new to iNat, I browsed haphazardly through a few thousand non-RG observations in my city, across every kingdom, and I simply marked everything on each page as reviewed. Because I was browsing over a period of weeks, I didn’t know an easier way to see, on a fresh day, only the things I had left to look at - which the default ‘not reviewed’ filter then did for me. Basically, I was using it as a ‘viewed’ marker.

Now I realise a better way is to work through one taxon set at a time across its backlog: sort by observation date, keep the reviewed filter on (so your fresh IDs don’t throw off the paging of the results), move through the URL pages, and mark as reviewed only those observations I want to revisit someday. My previous large reviewed set is going to interfere with how I use those markings now (so these days I mostly favourite observations I want to revisit in the short term), but at least I can still filter on dates, location, and taxa for now.

1 Like

This is a point I made elsewhere - that people sometimes use “reviewed” in idiosyncratic ways, possibly because they lack other tools to accomplish the thing they are trying to do. We all have unique needs and workflows. I’m not against this kind of ‘repurposing’ in principle (I do it myself), but at a certain point, one needs to have the right tool for the job. If many people are resorting to an assortment of hacks to get the job done, perhaps it’s time to look at some kind of enhancement/redesign of the existing functionality.

The same argument could be made here. Using the number of times an observation has been marked as reviewed as a proxy for the “difficulty” of the ID is an interesting idea. However, several people have pointed out possible pitfalls. Since we’re mostly talking about observations that are already being looked at and are being pondered, it seems to me that we should take advantage of that. We should have some kind of “difficult ID” indicator that can be upvoted, as this would “capture” some of the effort that reviewers have already put into the observation. I don’t know if it’s practical, but if such an indicator existed, perhaps it could be used to influence the community ID algorithm in some way (require greater consensus for RG? - I haven’t thought through this part).
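To make that half-formed idea a bit more concrete, here is a purely hypothetical sketch - the 2/3 value mirrors the current community-ID threshold, but the stricter value and the function itself are invented for illustration:

```python
# Purely hypothetical: a "difficult ID" flag that raises the consensus needed
# before an observation can become Research Grade. The 2/3 value mirrors the
# current community-ID threshold; the 80% value is invented for illustration.
def can_reach_research_grade(agreeing_ids, total_ids, flagged_difficult):
    if total_ids < 2:
        return False
    threshold = 0.8 if flagged_difficult else 2 / 3
    return agreeing_ids / total_ids > threshold
```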

One pitfall of using a reviewed count that I just thought of:

For my region/taxon, I frequently see that when somebody posts an observation of something unusual/exotic, there are an unusual number of people who “agree” with the ID, even when it’s a straightforward slam-dunk (no question of what it is). I see a number of power-identifiers from across the continent chiming in with the exact same ID. Sometimes, the confirming IDs are added years after the observation goes RG. I have no idea why folks would do this - I NEVER feel the need to agree with an ID that has already been firmly established (I just click “reviewed” to get it out of my work queue). This might confound the attempt to detect “difficult ID” scenarios based on the number of reviews (since each agreement will also contribute to the reviewed count).

Sometimes, I see this kind of thing happening with ho-hum common species as well. I have no idea why certain observations of a common species draw a large number of identification agreements while others get far less attention. Perhaps it has to do with which projects the observations get added to. I see some observations that belong to a LONG list of projects.

2 Likes

That sounds slightly harsh. Almost verging on critical. I didn’t think you had it in you.

respect!