On any given observation, it’s obvious when folks have added identifications, but it’s less obvious when identifiers have reviewed an observation without adding an identification. For an observation that is hard to identify, I bet that the lack of identifications can make it seem like no one is looking at the observation, when in reality, lots of people may have reviewed the observation without actually identifying. I’m thinking that a count of reviewers displayed on the observation page might provide a useful indicator of activity. It could let observers know that there are people reviewing the observation, even if they aren’t actually adding identifications. It might also give identifiers an idea of how difficult it could be to identify an observation.
Additionally, if there were a way to filter by number of reviewers, that might provide ways to more effectively spread the efforts of the identifier community. For example, it might be useful for an active taxon expert working with other identifiers of the taxon to look only for observations of their target taxa that have been reviewed by a minimum of n reviewers. This would give the other, less experienced identifiers a chance to get a first look at the observations. Or maybe an expert might want to look first at needs ID observations that have been reviewed by a very high minimum number of reviewers, to find the hardest-to-identify observations available, before moving on to easier identifications. Conversely, maybe a new generalist identifier might not want to tackle observations that have already been reviewed by, say, 20 other reviewers, since it might be unlikely they could do anything with those.
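To make the idea concrete, here's a rough sketch of what such a filter could look like. This is purely illustrative: the `reviewers` count field is an assumption (iNaturalist doesn't currently expose per-observation reviewer counts), and the function name is hypothetical.

```python
# Hypothetical sketch of filtering observations by reviewer count.
# The "reviewers" field is assumed; no such field exists today.

def filter_by_reviewers(observations, min_reviewers=0, max_reviewers=None):
    """Keep observations whose reviewer count is within the given bounds."""
    result = []
    for obs in observations:
        n = obs.get("reviewers", 0)
        if n < min_reviewers:
            continue
        if max_reviewers is not None and n > max_reviewers:
            continue
        result.append(obs)
    return result

# Example data (made up): an expert hunts for heavily reviewed
# observations, while a newer identifier skips anything with 20+ reviewers.
obs = [
    {"id": 1, "reviewers": 2},
    {"id": 2, "reviewers": 35},
    {"id": 3, "reviewers": 110},
]
hard_ones = filter_by_reviewers(obs, min_reviewers=30)
easy_pool = filter_by_reviewers(obs, max_reviewers=19)
```

In practice this would presumably be a search parameter on the Identify page rather than client-side filtering, but the selection logic would be the same.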
Finally, it might be useful for the system to automatically mark an observation as Can be Improved = N after a certain number of identifiers have reviewed it (assuming this request related to Can be Improved is rejected). I’m not sure what the appropriate number for this might be, but if I had to guess, I would say that after 100 reviewers, it might be time to mark an observation Can be Improved = N.
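The auto-mark rule above is just a threshold check. A minimal sketch, where the threshold of 100 comes from my guess above and the function name and parameters are hypothetical:

```python
# Sketch of the proposed auto-flag rule. The threshold is the guess
# from the post; field and function names are placeholders, not
# actual iNaturalist internals.

REVIEW_THRESHOLD = 100

def should_auto_set_no(review_count, already_set):
    """Return True if 'Can be Improved' should be auto-set to No:
    the flag isn't set yet and enough reviewers have looked at it."""
    return not already_set and review_count >= REVIEW_THRESHOLD
```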
What do you all think of these ideas? Do you have any ideas of your own related to how number of reviewers could be leveraged?
Show the number of reviewers on the observation page?
Yes, and provide an option to list the reviewers, too
Yes, but don’t provide a way to show who the reviewers are
It doesn’t matter to me
Provide a way to filter by number of reviewers?
Yes, and I would use such a filter
Yes, but I would not personally use it
It doesn’t matter to me
Automatically set Can be Improved = No after n reviewers is reached? (n needs to be determined, but it should be some relatively high number)
Of course I immediately hit the wrong answer while scrolling for the first one. Maybe there should be a feature request to withdraw an accidental vote on a poll? I meant to hit the second choice, rather than the first one, though I’m not sure if it’s in the same order for everybody or not.
Haha, I would change my vote for the first one too - I didn’t realize that it was asking about showing reviewers. I definitely would not want the “list the reviewers” option available - it just seems kinda creepy to me for other users to be able to see all of the observations that another user has reviewed. It would probably discourage me from clicking “reviewed” if I knew that it was a de facto activity tracker accessible to other users.
Also, side note, but I wonder if something like this would really increase iNat database size? Keeping track of each reviewer for an observation could mean adding tens or maybe even hundreds of usernames to a field for each observation (I really have no idea how many people review most observations).
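For what it's worth, a relational database would more likely store one (observation_id, user_id) row per review in a join table rather than usernames in a field. A back-of-envelope estimate, where every number is an illustrative assumption rather than an actual iNat figure:

```python
# Rough storage estimate for a review-tracking join table.
# All figures are illustrative assumptions, not real iNaturalist numbers.

BYTES_PER_ROW = 16          # observation_id (8 bytes) + user_id (8 bytes)
OBSERVATIONS = 100_000_000  # assumed total observation count
AVG_REVIEWERS = 5           # assumed average reviewers per observation

rows = OBSERVATIONS * AVG_REVIEWERS
raw_bytes = rows * BYTES_PER_ROW
print(f"{rows:,} rows, ~{raw_bytes / 1e9:.0f} GB before indexes")
```

Even under these made-up numbers the raw table is modest; indexes and row overhead would add to it, but it wouldn't obviously dwarf the rest of the database.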
One more question: I am not sure if reviewed is necessarily a good proxy for number of eyes on an observation (though it might be). Some users don’t tick “reviewed” even when they have, so showing this number might actually convince users that fewer people are looking at their observations. That is, new users might interpret it as “page views” and think no one has seen their uploads, when other users may have, but just not formally hit reviewed.
i’m not sure how indexes work in the system exactly, but making this information searchable might increase the size of required indexes.
i’m not sure if the system is already tracking views. so i wouldn’t want to add that kind of burden to the system. also, i think views more than anything feed egos, whereas reviews show that others are helping. so i am purposely talking here about reviews rather than raw views.
There are a lot of good ideas here. However, the proposal doesn’t really elaborate on when or how an observation becomes a reviewed observation. Is it when a potential identifier ticks the “reviewed” checkbox? One problem with that is that it’s (too) easy to “mark all observations as reviewed” without actually reviewing them.
Yes! I did figure out how to change it. Googling showed that plenty of other people have had the same question about Discourse polls…:)
I didn’t know that it was accessible via API, but that feels less weird to me as 99.9% of users won’t see it.
Yes, I understand the proposal is focused on reviews, but I was trying to specifically address one of the proposed benefits/raisons d’etre for listing reviews which was:
Here, I think at least some (and maybe most?) new users would interpret “reviews” (if it were posted) as “looking at,” as in the quote above, but they aren’t equivalent. Page views is a concept users would most likely be familiar with (as opposed to the iNat-specific “review” function) and is a likely user interpretation. “Reviews” will always be either less than or equal to page views (I think?), so I think there’s a chance this could leave the impression that fewer people are looking at an observation than actually are. If so, showing # of reviews might not have the intended effect (or even the opposite?).
right. consider this all just brainstorming at the moment. i think a final proposal and implementation would add a little information indicator next to the “review” label, and if a user clicked on that, a pop-up would provide a detailed definition of exactly what “reviews” mean in the context of the system and how they are accumulated.
1. Going through Needs ID: I check reviewed on anything with issues that I don’t want to see again or commented on without any other action, e.g. multiple species in one observation or duplicates, blurry pictures, joke observations, way outside my expertise, etc.
2. Going through RG observations to find those that may have been misidentified: I check “reviewed” rather than add another ID on those I don’t see issues with. (I may change that though, based on discussions elsewhere about IDs getting lost when someone deletes their account.)
3. Going through casual observations in my area: Since casual can’t be separated into “Needs ID” and “RG” subpools for now, I check “reviewed” on those that already have a CID rather than adding, like, the 6th agreeing ID.
Edit to add 4. Forgot this one: I also mark as reviewed observations I come across where the user has opted out of CID.
On my own obs, as the observer, I would like to see Seen by, with a number, and a list.
Say it’s an orchid - I would like to know local orchid experts looked at it - and moved on without an ID. Probably inadequate pictures? Or difficult to ID without seeing …
But when I am IDing - I would also like to see, that number and that list.
My own IDs that I value most highly are when I have cautiously added a broad ID - say Coleoptera - and one of my trusted identifiers agrees, at the broad taxon.
Seen / Reviewed by … is another layer of info for IDing. Could also be used for don’t @mention him, he has already looked at it.
This is no more like stalking than adding an ID with your name. Identifiers are busy; not all of them choose to use one of the current visible signals for ‘I was here’.
I doubt that the number of users who looked at a given observation gives a reliable measure of whether it can be improved or not. There is a mix of beginner users and experts and everything in between. There might be X people having reviewed an observation without IDing, but none of them happen to be skilled in that particular taxon. Besides, not everyone behaves on the site in the same way. For example, whether I mark observations I’ve seen but not identified as reviewed is rather random.
Not necessarily. One can click on “mark all as reviewed” on an Identify page without actually clicking through to any of the observations, so an observation could be marked as reviewed by identifiers who haven’t ever opened the observation page.
When I’m looking at my search results, I scan for observations that interest me or that I might be able to identify. After reviewing those observations, I click “mark all as reviewed” on the entire search result page, because I don’t need to see that group of observations again. But I haven’t actually reviewed some of them. So reviewer count could be skewed.
No. Absolutely not. Why? “Reviewed” doesn’t mean anything useful. It doesn’t mean “I agree, so why add an ID?” It doesn’t mean “It can’t be identified.” All it means is “I don’t want to see it again.”
When going through observations from my county or adjacent counties, I often mark the mushrooms “reviewed” because I’m no good at identifying them and I’m likely to go through those observations again later, hoping I’ve learned enough to identify some. I’ll never know enough about mushrooms to be able to help.
I periodically do a search for non-vascular plants in my area and mark them all as reviewed, because I don’t know how to ID them but I do want to see everything labeled “Plantae” that’s actually vascular.
That’s a narrow point of view; it is absolutely useful to visit observations from years ago with close to no reviews, ID them or mark them as cannot be improved.
It’s not linked to whether you know or don’t know the group!
I often mark things as reviewed without trying to ID them because I know they are something I’m not good at IDing and don’t want them to come up again. If a whole bunch of people do the same, it doesn’t indicate that it’s a bad (not enough detail to ID) observation. So I don’t think auto-marking stuff as can’t be improved is a good idea.