Fully agree with this. There are certain plant species that I can ID from colour alone, regardless of how blurry a photo is; there’s nothing else with that colour scheme that you’re going to find in that area.
Some trees have pretty distinctive shapes as well, although the one that immediately jumps to mind is Acer pseudoplatanus, which looks tremendously similar in shape to Aristotelia serrata. I’ve come across a blurry photo before and been unable to tell whether it was a leafless sycamore or a dead wineberry in winter.
I’ve seen enough examples of experienced observers (and even experts) misidentifying perfectly adequate photos that I don’t put much faith in what folks say they saw (which would include a drawing). If folks make mistakes IDing photos that they can look over and consider at length, how can one trust their recollections of something they might have briefly glimpsed some time in the past? There is plenty of research demonstrating that we should take eyewitness testimony with a grain of salt. To my mind, an observation with a photo that is blurry to the point of being ambiguous is equivalent to an observation with no photo at all. If an observation that doesn’t contain a photo is de facto “casual”, I don’t see why we should treat one with an ambiguous photo any differently.
(note that I exclude cases where a poor/blurry photo still contains enough information that a knowledgeable IDer can say “yeah, it’s blurry, but it can only be X”)
i used to add photos knowing they weren’t diagnostic, long ago, but these days, if i want to observe something but know no one will be able to verify it based on the photo, i just do a photoless casual observation.
Who determines “prematurely”? I don’t like marking higher level IDs with “as good as it can be”, but I also don’t like it when folks with little to no expertise in a taxon come along and put a species-level ID on an observation just because they feel it is languishing in the “Needs ID” state. I’d be OK with the observation remaining in the Needs ID state indefinitely, but there seems to be an ongoing crusade to ID these observations, whether the IDs are accurate or not. (note that I’m talking about observations that have had several people put higher level IDs on them, with explanations)
So what’s the right answer? I don’t think we can have it both ways. Perhaps we need something less drastic than “as good as it can be”. Something that keeps the observation out of the queues of folks who are just looking to clean up Needs ID observations. Essentially, a flag that says “this observation has been looked at by folks with some expertise in the area. It’s still in the Needs ID state, but that’s OK. Unless you’re a taxon expert, you don’t need to look at this”
I confess that I will sometimes put an ID on an observation where the supplied photographs do not have enough detail to pin down an individual species. I will only do this if I am familiar with the organism in question and have taken other factors into consideration. One of my fields of interest and expertise is Australian Eucalypts, which are a very diverse and widespread group. As an example, I may be looking at an observation which can be narrowed down to 3 possibilities based on leaves, fruit, bark, etc. In this case I will look at existing topography, soil and/or geology mapping for that site, as well as existing records from that area (records can be from iNat or other sources). Because, as with many plants, Eucalypts are often very site specific, I will make a call. In all cases such as this I will explain my reasoning in the notes. I further confess that I have made confident IDs on all sorts of organisms in the past, only to have someone else change that ID by pointing out things that I missed or was not aware of.
I think I have posted an image of an organism that may not have been identifiable. But because I had seen it with my own eyes and I knew what it was, I added the ID. Because it was fast moving and running away I didn’t get a clear photo. But I did add photos of tracks and scats to help with ID.
Don’t you mean higher taxon? My point was that making the observation go RG at a higher level is fairly drastic, as it will drop off the radar for many people. I was suggesting something that keeps the observation at Needs ID, but indicates that the observation is not simply sitting there because nobody has looked at it. Maybe some sort of intermediate state like “Needs Expert ID”. It might kill two birds with one stone:
folks who are just trying to help clean up the Needs ID backlog don’t waste their time on it (and possibly mess up the ID)
if a taxon expert ever does come along, they will have a ready-made and easily accessible queue of observations that are most in need of their attention
My understanding is that “as good as it can be” means that the ID cannot possibly be improved because information needed to arrive at an ID is missing. For example, the specimen would need to be dissected to identify, but the observer did not collect it; the identifying features are on a side of the organism not shown in any of the pictures; or the specimen is an immature that could only be identified if it had been reared to adulthood. In my opinion, clicking “as good as it can be” means that you know the particular species options very well, and you know that this photo is inherently unidentifiable. There are plenty of photos of tiny obscure critters that might only be identifiable by one or two experts in the world (and they might sit there for 10 years in Needs ID waiting), but I believe these should still be in Needs ID. After all, they still need an ID, and no one who’s looked at them has known enough to say “relevant features are definitely missing in this photo”.
I understand though that it would be helpful to have some sort of limbo-state between “Needs ID” and “RG/Casual” due to an “as good as it can be” vote. There are plenty of cases where I look at something and think “I know of one taxonomist in Canada who could maybe ID this, but I doubt anyone on iNat will be able to”. I would never vote “as good as it can be” on these, because I lack the knowledge of whether that one guy could ID it or not. But yeah, it’s annoying to know that the observation will clutter up all the other identifiers’ queues for years when it’s very unlikely anyone will be able to name it.
So yeah, by “prematurely” I mean someone who doesn’t in fact know whether an ID is possible, but votes “as good as it can be” because they personally think the ID would be very difficult, without definitively knowing what features are needed but not shown.
Maybe this doesn’t happen with the taxa that you work on. As I said, I’m content with leaving problematic observations in the Needs ID state indefinitely, but the problem is that folks trying to clean up the Needs ID backlog come along and try to ID observations like these, even though they don’t have any special expertise in the area. They probably think “oh, it’s obviously X”, or maybe they just rely on the AI, not knowing that there are several cryptic species in play (even though there are often comments clearly spelling out the conundrum).
I did this with a dragonfly the other day. The photo is obviously a dragonfly, although it’s very blurry because it was zipping all over the show, several metres up, and I just had a phone. Being large and black-and-yellow, there’s only a single genus of dragonfly it could be at this location. Sadly I can’t ID it to species, because I don’t have that knowledge, but it’s certainly that genus.
I’m working on European Xylocopas, which are often not IDable from photos and currently have rather dynamic ranges due to climate change. I have found that people will add species-level IDs in such cases – sometimes to old observations – even if the observation has been made RG at genus and even if there is discussion in the comments about the ID challenges. So such IDs do not seem to be motivated purely by a desire to help reduce the number of “needs ID” observations.
(I have been fairly liberally marking the distant Xylocopa blobs and the ones with their heads buried in flowers as “good as it can be” at genus even though I am aware that there might someday be someone out there who can ID some portion of these to species. But I figure that if such a Xylocopa guru comes along, there is nothing to stop them from reviewing all the observations that are RG at genus. In the meantime, there is no easy way for the rest of us to quickly tell whether any given observation has been reviewed by someone who knows what they are doing or whether users have randomly chosen and agreed to one of the two or three possibilities. With 30,000 or so observations, this is a problem. We need to find a way to divide our labor instead of everyone reviewing some random subset with the result that some observations end up reviewed by 10 people and some by none. So making those observations RG that are unlikely to be IDable by any of us can help IDers direct their energy more effectively.)
I haven’t noticed that happening, but that could be because in the taxon/region I work on, we generally haven’t been marking observations as RG at higher-than-species. I feel like I’m one of the few people who actually reviews observations that are already RG (at species level), but it could be that if we had more higher level IDs at RG, there would be folks looking at those observations.
I am willing to concede that in the absence of a hypothetical “needs expert review” state, this may be the best solution (vs just leaving tricky observations in the Needs ID state).
We need to remember that a hypothetical expert who may come along in the future to sort out these tricky observations may have limited time, and may not know all the ins and outs of filtering observations. They may not think of checking observations that are RG at a higher-than-species level (they may not realize that such a thing is possible). Having a state that explicitly identifies observations as being in need of expert attention might be a better way to go.
I also feel that if I start using the “as good as it can be” option more often, there will be folks who will consider it over-reach (as in “just because YOU feel the ID can’t be refined any further doesn’t mean that NOBODY can”). Note that for the most part, I’m fairly comfortable with making these calls, it’s just how others may react that gives me pause. There’s a tendency here to label any activity that smacks of assertiveness/initiative as “bullying”.
yeah, i do wonder if some way of tracking review frequency, at least internally, could lead to targeted searches: identifiers could search for observations that have never been reviewed if they don’t know the taxa as well, and for ones that have had many reviews if they are experts in those taxa. The latter group could focus on determining whether a further ID is possible or not. And it should count the genus-level (or whatever) IDs as diagnostic.
For example, a certain portion of the people who are looking at the recent iNat Year in Review are looking at the Frosted Phoenix observation, and adding a confirming ID. Of the 39 IDs, probably only a small handful of people actually have the expertise to ID a Frosted Phoenix. The same thing happens with every observation that gets famous for any other reason…
Most of my American Alligator observations have 6 IDs. I stopped identifying birds pretty soon after I started because the options were either unidentifiable specks, or common easy species with 4-5 confirming IDs already. Not a practical use of time…
No, this is not a case where a function to see how many people have reviewed an observation would be useful.
What I mean by “reviewed by 10 people” is 10 IDs by people who know something about Xylocopa identification. There are also observations that have just as many IDs by people who don’t know anything about Xylocopa identification (because they assumed the photo looked correct so it must be that species, or they weren’t aware that there were other possibilities, or whatever; these are big, conspicuous bees, which often seems to result in a belief that there can’t possibly be lookalikes).
A feature to see how many people have reviewed an observation would not be of any use, because it would not tell us who reviewed it – in other words, exactly the situation we have now, where I can see how many IDs an observation has, but I have no way of knowing who ID’d it unless I open it (and then many people will just add an ID since they are looking at it anyway).
It doesn’t necessarily accomplish this, but I’ve recently started sorting by “Date Updated: Ascending” for my moth IDs. Since newer observations seem to get the most attention, I figure this will at least get me looking at stuff that’s been ignored for a while. There are some super easy IDs in the old stuff too, ones that the CV will even nail now that it wasn’t trained on back when the observation was last identified.
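In URL terms, that sort looks something like the sketch below. This is a minimal sketch only: the order_by=updated_at and order=asc parameters are what the web Explore/Identify pages appear to put in the address bar, so verify them against a URL your own browser produces before relying on this.

```python
# Build an Identify URL that surfaces the least-recently-updated
# "Needs ID" observations first. Parameter names are assumed from
# what the web UI puts in the address bar; verify in your browser.
from urllib.parse import urlencode

params = {
    "taxon_id": 47157,            # Lepidoptera; swap in your own taxon
    "quality_grade": "needs_id",  # only observations still needing an ID
    "order_by": "updated_at",     # sort key: date updated
    "order": "asc",               # ascending = longest-ignored first
}

print("https://www.inaturalist.org/observations/identify?" + urlencode(params))
```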
A count of how many people have marked an observation as reviewed would be a neat statistic to see, though. The only way I can see it being misleading is for things where different identifiers specialize in different life stages. I sometimes mass “mark as reviewed” caterpillar observations to get them out of my adult moth workflow, and I’m sure the caterpillar folks do the same to adult moth observations too.
I wrote out some of my reasons why I don’t think a count of how many people have marked an observation as reviewed would tell us anything meaningful here.
(If people were more consistent about marking life stage for insects, you could simply use this as a filter instead of marking caterpillars as reviewed.)
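For what it’s worth, that life-stage filter can go straight into the URL as term_id/term_value_id parameters. A rough sketch below; the numeric IDs (Life Stage = term_id 1, Adult = term_value_id 2) are the values commonly quoted on the forum, so treat them as assumptions and double-check against a URL generated by the web UI’s annotation filters.

```python
# Restrict an Identify query to observations annotated as adults, so
# caterpillar records never enter the queue at all. The numeric IDs
# (term_id=1 for Life Stage, term_value_id=2 for Adult) are the ones
# commonly quoted on the forum -- verify against a web-UI-generated URL.
from urllib.parse import urlencode

params = {
    "taxon_id": 47157,        # Lepidoptera; swap in your own taxon
    "quality_grade": "needs_id",
    "term_id": 1,             # annotation: Life Stage (assumed ID)
    "term_value_id": 2,       # annotation value: Adult (assumed ID)
}

print("https://www.inaturalist.org/observations/identify?" + urlencode(params))
```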
The URL snippet for excluding observations ID’d or reviewed by a particular user is the best concept I have come up with to help IDers avoid duplicating effort (e.g., if user A has ID’d the observation, I don’t need to prioritize looking at it). This does require some extra work from IDers (either bookmarking/using a link or manually changing the URL each time), and I believe this doesn’t work as an “or” query (i.e., observations ID’d by either user A or user B), which limits its usefulness unless there is some additional coordination (dividing up by regions, with one or two users who have reviewed a significant portion of the total). But it seems like a starting point for organizing a coordinated clean-up effort.
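To make that concrete, here is roughly the kind of URL I mean. A sketch under some assumptions: “user_a” is a hypothetical username, the taxon_id and place_id values are placeholders to look up for your own group and region, and without_ident_user_id is the unlisted parameter that gets passed around on the forum, so confirm it still behaves as described.

```python
# Build an Identify URL that skips observations a trusted identifier
# (hypothetical username "user_a") has already ID'd, so effort isn't
# duplicated. "without_ident_user_id" is an unlisted parameter passed
# around on the forum; confirm it still works before relying on it.
from urllib.parse import urlencode

params = {
    "taxon_id": 51110,                  # placeholder; look up your taxon's ID
    "place_id": 97391,                  # placeholder; e.g. a continent or country
    "quality_grade": "needs_id",
    "without_ident_user_id": "user_a",  # exclude obs already ID'd by user_a
}

print("https://www.inaturalist.org/observations/identify?" + urlencode(params))
```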