Show number of reviewers for an observation?

As others have commented previously, clicking “reviewed” may not be the best way to do what you want. I usually click “reviewed” without adding an ID in “I give up” situations, that is, cases where I can’t help, where it looks like my help won’t be appreciated, or where I’ve done my best and can’t do more.

What I do when I see something cool that I’d like to ID but can’t currently figure out is click the “fave” button so I remember to keep an eye on it or try again later. So what I’d really like are buttons, in addition to “fave”, for things like “unusual species” or “check taxonomy” or “excellent exemplar”.

2 Likes

I’ve never intentionally clicked “mark all observations as reviewed” but I have clicked it by accident more than once. It’s a royal PITA to back out of this.

1 Like

actually, i think what you’re describing here is exactly what i’m hoping a count of reviews would reveal.

i can see that intention is not captured exactly in the reviewed flag, but wouldn’t knowing the total review count on an observation generally be more helpful than not, especially in comparison to similar observations?

1 Like

While I understand where this desire is coming from, I certainly do not want others to be able to see whether I looked at or reviewed a particular observation while roving about on iNat. I would consider it an invasion of my privacy tantamount to publishing my web browsing history. If I specifically choose to interact with an observation (e.g., adding an ID, comment, annotation, DQA vote, etc.), then sure, record that it was me. Otherwise, the fact that I visited an observation could be due to any one of dozens of idiosyncratic reasons. Whether I add an ID or not likewise depends on lots of different factors, many of which are not directly related to whether the features necessary for an ID can be seen.

I think allowing users to see the number of people who have clicked “reviewed” on their observations might also have the opposite of the hoped-for effect. I’m familiar with the experience of uploading observations that just seem to sit there, and wondering if it isn’t IDable or whether it’s simply that nobody has gotten around to looking at it. But would seeing that x people reviewed it really make me feel any better? Wouldn’t it be more likely to be discouraging? (People looked at it but didn’t provide an ID. What did I do wrong?)

Like others, I’m not convinced that the number of people who mark an observation as reviewed is a good metric for determining whether the ID can be improved or not. The ways that people use this function (or do not) are simply too varied.

I generally use Explore rather than the Identify module for IDing, so I’m usually not deliberately marking anything as reviewed. It isn’t a tool that is useful for managing my particular workflow, and in fact I find it annoying that iNat automatically marks stuff as “reviewed” in response to certain actions. I also do a fair amount of adding broad IDs, so me marking something as “reviewed” in such a context would be pretty meaningless to others – all it would indicate is that it is a taxon I don’t know much about.

3 Likes

You can easily mark it again to un-review the page.

taking an observer’s perspective, my thought is that if i know that it usually takes n reviews for similar observations to reach research grade, and my observation has far fewer reviews than that, then i know that i probably just need to sit tight and wait for more eyes. on the other hand, if my observation gets far more reviews than n, and it still hasn’t reached research grade, then that’s when i start asking for help / guidance / feedback.
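to make that heuristic concrete, here’s a rough sketch in python. everything in it is hypothetical: iNat doesn’t expose a per-observation review count today (that’s the point of this request), and the “typical reviews to research grade” number would be my own estimate from watching similar observations.

```python
# hypothetical "wait or ask" heuristic; none of these numbers come from a real iNat API
def wait_or_ask(my_review_count: int, typical_reviews_to_rg: int, is_research_grade: bool) -> str:
    """Suggest whether to keep waiting or to ask for feedback.

    typical_reviews_to_rg: how many reviews similar observations usually
    accumulate before reaching research grade (the observer's own estimate).
    """
    if is_research_grade:
        return "nothing to do, it already reached research grade"
    if my_review_count < typical_reviews_to_rg:
        return "sit tight and wait for more eyes"
    return "ask for help / guidance / feedback in a comment"

# example: similar observations usually take ~6 reviews to reach research grade
print(wait_or_ask(my_review_count=2, typical_reviews_to_rg=6, is_research_grade=False))
```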

actually, i think this is the kind of thought process that i’m hoping a count would initiate, except that i would frame it more as “how can this be improved?” rather than “what did i do wrong?”. someone who was still learning could post a comment on their observation saying something to the effect of: “i noticed that n people reviewed the observation but there’s only 1 identification. is the evidence here not good enough to further confirm or refine the observation? what kind of evidence would be helpful to include in the observation to boost the chances that this can be identified better?”… and hopefully someone would see that and provide some guidance.

3 Likes

Thinking of analytics on YouTube, for example, one can get data about traffic sources. On iNat, we might be curious to see view counts separated by source. It could be as simple as “Identify mode” vs. not, or it could be more detailed, such as Identify mode vs. not, broken down by chosen parameters like random order, custom URL, map, etc. Not saying this is feasible or being requested, just that we would be interested in these data.
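Purely as an illustration of the kind of breakdown I mean, a minimal sketch (the source labels and the view log are invented for the example; nothing like this exists on iNat today):

```python
from collections import Counter

# hypothetical log of how one observation was reached; the source labels are made up
view_log = ["identify", "identify", "explore_map", "custom_url", "identify", "random"]

by_source = Counter(view_log)                      # detailed breakdown per source
identify_views = by_source["identify"]
other_views = sum(by_source.values()) - identify_views

print(dict(by_source))                                            # detailed view
print({"identify": identify_views, "not_identify": other_views})  # simple Identify-vs-not split
```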

3 Likes

I don’t want to see this in real time like on a Reddit post. I like to think of this as a lone pursuit, at least at the moment of entry.

And for those who don’t want to be listed: a personal setting, similar to the one for showing obscured locations to project curators.

Show my name on the reviewed list: yes or no.

Except that – as I and others have pointed out – clicking reviewed doesn’t necessarily correlate with whether an observation can be ID’d. Just because 5 people have looked at an observation doesn’t mean those 5 people are the ones who are capable of helping with the ID. Presenting this data numerically gives a misleading impression that each “reviewed” means the same thing. But given that we can’t know why someone clicked “review” in any given case, it isn’t possible to reliably interpret such a number.

You are taking a statistical approach: the average number of reviews for an observation. But this is going to vary vastly depending on the particular taxon, the skill of the IDers, the difficulty of the observation, and the ratio of active IDers to observations in a particular area.

I doubt most users, particularly newer ones, are going to see numbers and interpret them quite so dispassionately. I was trying to get at the emotional aspect – it’s disappointing when an observation doesn’t get responses, and some of us are insecure and prone to wondering if what we are doing matters. Evidence that people have been looking at the observation and not providing an ID (rather than just being too busy to get to it) can quite easily emotionally translate to “people don’t want to ID my observations” or “my observations aren’t good enough”. In such a case, one is likely to feel rather vulnerable and proactively asking about the reason for the lack of IDs is not necessarily going to be a comfortable thing to do.

2 Likes

From my own identifying I know, as people have said, that reviewed means many things. As the observer, I would find it reassuring that people have seen and ‘looked at’ my obs. Getting an ID takes patience. Whether you are adding it, or … waiting … for it. I IDed an obs from 2012 yesterday (I wonder how often that one was looked at across the intervening ten years?)

5 Likes

Yes, looking at a thumbnail in Identify wouldn’t count as a page view, but neither would the full observation view in the Identify modal (I don’t think). And I would guess that this is how most identifiers ID (either agreeing from the thumbnail or via the Identify “close up” view). I guess I should have said “looked at” rather than page views of the observation specifically.

The broader point is that there are lots of people who may have looked at or examined and passed on an observation without having reviewed it, so I’m doubtful that # reviewed is a great metric for trying to convince users that their observations are getting interest, when it will almost always underestimate the number of users who have looked at that observation (by whatever means).

One other example of how this might happen is if an identifier only adds a DQA vote for an observation (like a thumbs-down for Wild). This doesn’t trigger an automatic “Reviewed”, so a beginning user might think that their observation had been sent to Casual without anyone reviewing it (which could be even more frustrating or confusing than it currently is for them). They might feel like iNat itself is hiding their observations without any human input (maybe?).

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.