One pattern I’ve seen when an expert with particular skills in a difficult taxon comes along is that they will often look at everything, regardless of whether it is RG – at least until they have some idea of how much clean-up the taxon requires (how many observations have wrong or unjustified IDs). Once they have done this (if they do not give up in despair), they may start trying to figure out how to prioritize their efforts and what workflows allow them to do so. Filtering for observations above species level may not be obvious at first, but it is right there to be found in both Explore and Identify, unlike some of the more abstruse methods like URL manipulation.
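For anyone curious what that filtering looks like in practice: both Explore and Identify are driven by query-string parameters, so “URL manipulation” is just building those by hand. A minimal sketch in Python; the parameter names (`lrank` for the lowest rank shown, `quality_grade`) reflect my understanding of the site’s filters, and the `taxon_id` value is purely illustrative:

```python
from urllib.parse import urlencode

BASE = "https://www.inaturalist.org/observations/identify"

def identify_url(**filters):
    """Build an Identify URL from filter parameters (assumed names)."""
    return BASE + "?" + urlencode(filters)

# Observations still needing ID whose current ID is genus level
# or coarser, within an illustrative taxon.
url = identify_url(lrank="genus", quality_grade="needs_id", taxon_id=47157)
print(url)
```

The same approach works for Explore by swapping the base path; the point is only that the filters are ordinary URL parameters, not anything hidden.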
In principle, sure. But then we run into the problem of: who gets to decide when an observation is in need of expert attention, and how do we know when that expert attention has been provided? I suspect that we would end up with something like what happens with the “ID can be improved” button now – people would use it to indicate that they want additional confirmation, whether or not the observation is particularly tricky, and it would end up not getting unchecked by the user or counterchecked by IDers, because they either do not notice it or do not feel qualified to make that call.
Something like a “difficulty of ID” ranking that people could vote on for individual observations might be a way to provide more advanced filtering options, separate from the “needs ID” question, but there are plenty of questions that would need to be worked out for such a feature.
That is why I said who reviewed it.
If the two taxon specialists I would have asked for Xylocopa have already looked at it (but didn’t leave IDs or comments), I don’t know that, so I ask them anyway.
I will repeat: What would be the benefit of seeing who has marked an observation as reviewed? This is an idiosyncratic decision that has more to do with personal workflows than whether the observation can be ID’d more specifically. Unless you know that particular user’s reasons for doing so, it tells you absolutely nothing.
If you tag me on an observation of a taxon where you know I have local expertise, I might or might not have any insights to offer. This applies just as much to observations I had seen and not ID’d (because I did not feel I could add anything useful at the time) as to observations I had not previously seen at all. Interests change. Sometimes we notice different things on a second look, or we have learned more since we originally looked at it.
Marking an observation as reviewed is not a permanent judgement of an observation any more than an ID is – it is merely a tool for managing one’s workflow at a particular time.
I am very unhappy with the idea that when I have chosen not to publicly interact with an observation, and instead privately marked it as reviewed because it happened to suit me at the time, this would be interpreted by others as saying anything about the observation. If I wanted to express thoughts about the ID (or the impossibility of making one), I would do so, by adding an ID or a comment. If I don’t, that is nobody’s business but my own. Don’t impute an opinion to me when I was expressing no such thing.
I also don’t think “reviewed” means much about identification. I mark almost all fungi in my area “reviewed” just because I can’t identify them and want them out of the way. Knowing that I marked a sedge “reviewed” would tell you something about the set [usefulness of picture + my ID skills] but not necessarily about whether the sedge could be ID’d by someone else.
I think that for unidentifiable pictures it would be good to write a comment about the problem, e.g. “need photo of genitalia” or “need photo of perigynia” for identification. Obviously, that won’t stop all overconfident individuals from putting an ID on the observation, but it will help alert the people using it to the problem.
We actually do that with other identifiers for Convolvulaceae in Europe. It is quite effective: when we see that note, we know the other person also didn’t see the traits needed for a species ID in the picture, and it’s easier to “tick the box” afterwards if we agree.
It also helped with some prolific observers, who now know which traits they have to photograph if they want to have something better than a genus ID.
Indeed, I doubt “reviewed” is a good indicator. For example, one of my first steps when tackling casual observations is to filter for all observations without media and mark them reviewed. I do this so they won’t show up any more when I subsequently filter for casuals that are not captive, to catch and comment on problems such as missing dates. So a large chunk of my personal “reviewed” pool probably consists of obs without media by now. Almost all of these I haven’t even looked at (since there’s nothing there to look at); I just marked them in bulk as reviewed.
To comment on the original post, I often find myself thinking it would be great to have some indicator of confidence level on IDs to distinguish between “most likely” and “almost certain” IDs. Personally, I try to indicate with a comment if I’m making a “best guess” initial suggestion. The hope is that the next identifier who comes after me will take that as a note to look carefully and consider other options before confirming. If there were a more formal way of doing this (e.g. a checkbox for indicating identifier confidence), I would make use of it.
Having done IDs for a while, I have of course noticed many observers, especially new ones, uncritically confirming any IDs that get suggested on their observations. So I’ve gotten more cautious and if not sure will just add a genus ID and put my “best guess” species in the notes. That way if they uncritically confirm the ID it doesn’t take it all the way to RG yet. It also means if someone does put a finer ID, I will get a notification and can recheck.
If the observation is already RG, I try to assess how likely I think it is that it’s actually something else. E.g. if there’s a fifty-fifty chance it’s one of two species, I may disagree with a comment that it could be species B instead of A. If it’s more like 90% likely correct but with maybe a 10% chance it could be something else, I may put a non-disagreeing genus ID with a comment that it could also be something else.
If the picture is so bad that I can’t tell based on that, I try to assess how likely it is that the species ID is correct based on location/date and rarity. If it’s out of range or out of season or a rare species, I may put a disagreeing ID with a comment that the initial ID is unlikely and needs further evidence. If I think the chances are good that it may be correct and the data point is not an outlier compared to others, I may just mark it reviewed or add a comment asking if they have better pictures or field notes on identifying features.
I’ve done this, as a courteous “FYI” kind of thing – thinking it would be nice to give the observer an idea of what it probably is. Then the observer (or someone else) enters the species I suggested as the ID.
For “reviewed” - I use it completely differently. In my world, “reviewed” means it’s a “good/usable” observation. I use it as a way of filtering observations that I regard as usable for the database I manage. I really have no other way of marking observations as “approved”, short of agreeing with the ID on every single one. I tried punting “bad” observations out of our project as a way of rejecting them, but they end up back in the project on the following day.