Use of "It's as good as it can be" in the DQA

In this case, it is good to go to the “Data Quality Assessment” panel, line “Based on the evidence, can the Community Taxon still be confirmed or improved?” and check “No, it’s as good as it can be”.

Update on 13th Feb. 2023: of course, if the observation has no identification yet, first add an identification; otherwise “as good as it can be” would not make sense. (I have also suggested disabling this DQA for observations without an identification.)


Yes, usually I remember to do that as well.

I really don’t like that checkbox or its cousin, “Yes”. It’s asking for a value judgement, rather than a statement of fact. Personal preference of course, I am sure it has its uses, but it seems to be misused quite a lot.

Yes, in particular its misuse for multiple-species observations that get fixed later but remain flagged as “No, it’s as good as it can be”, even though a much better ID is quite possible. Check this project to see how many there are:

BTW, please vote for this feature request that can prevent a misuse of the “Based on the evidence, can the Community Taxon still be confirmed or improved?” DQA:


The frequently used responses post now includes statements like “make a comment if this is corrected, so I can fix my ID” for issues such as multiple species and inaccurate location.

I feel that if I am going to DQA something, I should also fix it when it is corrected. But many observers don’t comment. Asking them to might help.


I looked through the observations in the project, and mostly they seem to be ones that people have judged to be duplicates. I didn’t see any that looked like they used to be multiple species but are now fixed.


Yes, or a new feature that automatically resets part of the DQA after a significant change to the observation. When there is a DQA for “multiple-species observation” (vote here), I suggest clearing it automatically once the observer has removed a photo (instead of relying on a manual comment plus a manual action… a waste of time!).

Whenever possible, automating things will always be better than relying on people to do something extra, and much better than relying on several people to perform several actions.

Thanks for the Frequently Used Responses; I will remember them.


Quick check of one in Cape Town - multiple, garden plant, from 2020 - that observer will never respond.

I’m wary of automating “multiple is now one species.” I sometimes have to study the photos carefully to decide what we are looking at - is it maybe three in-focus pictures of a caterpillar, rather than focused on the plant behind and then the one in front? Plus we have identifier bias: he sees the caterpillar, I see the flower, and she sees a tiny arthropod something … And eventually the observer comes back and says, actually I was interested in the fungal infection. And we all start again …

PS checking thru the African ones - and done


I only looked at a few, and several of those were former multiple-species observations. Then I fixed the DQA section (saying “Yes”, the Community Taxon can still be improved) and removed these observations from the project. That may explain why we didn’t see the same things in this project. (And I may have over-extrapolated.)


If I come across a duplicate where the observer was asked a month or more ago to delete it, but hasn’t, I just go ahead and ID it. Should I be marking “as good as it can be” instead? What if it’s an Unknown?

What is the appropriate response/action for a user uploading multiple duplicate images or organisms?


Interesting - thank you for finding that thread. I don’t think I’ve read that before.

But I’m still confused - it sounds as though flagging duplicates will move them to Casual, but then the flags never get resolved. Is that the current way iNat staff want identifiers to deal with the issue?

Also, as an identifier of old Unknowns, I may never see that an observation is a duplicate, as the first observation in a duplicate series may well be identified already, leaving only a single photo for me to find. What happens then?

For what it’s worth, what I do now is this: If no one has asked the observer to delete the duplicate, I do that. If I see that another observer has commented only something like “duplicate,” with no request to delete, I try to remember to ask the observer to delete. If the observer has not responded to requests to delete for a month, I just go ahead and ID the observation, as I thought there was no other course of action.

To get back to the initial question of this thread, I’ll add something else that I’ve learned from IDing Unknowns: how hard it is to design a system (like iNat) that has a simple, useful, and rational response for every conceivable situation that may arise.


Please do not flag duplicates. This was a practice years ago, but staff have reiterated that it should not happen. It just creates a massive list of flags on which nothing can be done, which curators then have to sort through to find the flags that do require action. I would also not mark “good as can be”, as I don’t think that accurately applies.

I totally sympathize with the issue, as I find duplicates very annoying. In my opinion, the best way to deal with them is to just ignore them. I leave my standard comment saying that the observation is a duplicate and asking the user to delete it. I do not add an ID, so that it does not become RG. Then I tick “reviewed” and move on.

Most dupes are made by new users, and they learn not to make them (though they don’t often delete their existing dupes). If a user continues to make lots of duplicates, even after it has been explained to them in comments or via DM, and/or seems to be doing so in bad faith, you could consider contacting


Thanks for the clarification - that makes sense to me. Part of my wish to have a “solution” is that I want all of the Unknowns to have at least an initial ID. But that’s clearly an impossible task! Plus there are observers who, for perfectly good reasons, upload observations as Unknowns and it might take them months to have the time to go back and add IDs. So, I should accept that some Unknowns will simply stay as Unknowns.



If you open the obs and glance at their recent obs gallery along the bottom - that is when my autopilot sees … the exact same photo, either with an ID or still waiting. Copypasta duplicate at …


Staff don’t care; for them it’s not a problem at all. It would be easy to add a DQA for them, but they haven’t done it in years; they even said duplicates are not problematic. =/


Please consider voting for this feature request:

Duplicate prevention: Notify observers if their image checksums match others on the site.
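For anyone curious how such a check might work, here is a minimal sketch (not iNat’s actual implementation) of checksum-based duplicate detection: hash each uploaded image’s raw bytes and flag any uploads whose hashes collide. The function names and the sample byte strings are hypothetical; a real system would also need perceptual hashing to catch re-encoded or resized copies, which a plain checksum misses.

```python
import hashlib

def file_checksum(data: bytes) -> str:
    """Return a SHA-256 hex digest of an image's raw bytes."""
    return hashlib.sha256(data).hexdigest()

def find_duplicates(uploads: dict[str, bytes]) -> dict[str, list[str]]:
    """Group upload names by checksum; any group with more than one
    entry is a set of byte-for-byte identical files."""
    groups: dict[str, list[str]] = {}
    for name, data in uploads.items():
        groups.setdefault(file_checksum(data), []).append(name)
    return {h: names for h, names in groups.items() if len(names) > 1}

# Hypothetical example: two identical files, one distinct file.
uploads = {
    "frog_1.jpg": b"\x89IMGabc",
    "frog_2.jpg": b"\x89IMGabc",  # exact copy of frog_1.jpg
    "fern.jpg": b"\x89IMGxyz",
}
dupes = find_duplicates(uploads)
print(dupes)  # one group containing frog_1.jpg and frog_2.jpg
```

A checksum comparison like this would only catch exact duplicates (the same file uploaded twice), which is precisely the “small mistake” case discussed later in the thread; deliberate repeat observations of the same organism would hash differently and be unaffected.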


I just ID the duplicates as well, ideally to move them out of the “Needs ID” pile… so nobody else needs to bother with them. Not a fan of just leaving them there waiting for the next IDer to stumble over them… and the next… and the next…

If they are not a big issue, then it should also not be a big issue to just get them IDed and move on.


Me, too. If it’s OK for 20 students in a class to each photograph a particular frog, say, then it should be OK for one person to make a small mistake and upload the same photo twice. Or, for that matter, if I happen to walk the same trail two years apart and photograph the exact same out-of-the-ordinary plant. Or walk the trail two days apart and photograph the same plant because I’m interested in phenology. Any serious researcher using iNat for questions of abundance should know they will have to deal with double-counting and biased sampling. Sure, the observer should be asked to remove duplicates, but if they haven’t removed them after a reasonable interval (I generally use a month), then identify the duplicates and move on.


Which technically are not duplicates anyway.