Really, instead of the binary “reviewed” flag, there should be a flag that can take on multiple states. For example:
‘ok’ - sort of like what “reviewed” does now, but the “meaning” would be up to the individual user
‘watch’ - observation needs to be revisited/checked for some reason
‘ignore’ - hide from view, unless the user is asking to see observations with this flag
The system already stores a ‘reviewed’ flag, so it shouldn’t be that much work to make it a multi-state flag. I would bet that people who identify a lot of observations would appreciate this kind of flexibility (vs. the various hacks they are forced to use in its place).
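To make the proposal concrete, here is a hypothetical sketch of how the boolean “reviewed” flag could generalize to the multi-state flag described above. The state names are just the ones suggested in this thread; none of this exists in iNat itself.

```python
# Hypothetical multi-state review flag, per the suggestion above.
# UNREVIEWED corresponds to today's reviewed=False default.
from enum import Enum

class ReviewState(Enum):
    UNREVIEWED = "unreviewed"  # default, like reviewed=False today
    OK = "ok"          # meaning left up to the individual user
    WATCH = "watch"    # needs to be revisited/checked for some reason
    IGNORE = "ignore"  # hide unless the user asks to see ignored ones

def visible(state, show_ignored=False):
    """Filtering rule implied above: 'ignore' hides an observation
    unless the viewer explicitly asks to see ignored observations."""
    return show_ignored or state is not ReviewState.IGNORE

print(visible(ReviewState.IGNORE))        # → False
print(visible(ReviewState.IGNORE, True))  # → True
```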
Can you filter observations based on that? I’m guessing you can’t set it from the thumbnail view, as you can with “reviewed”.
Like I said, ideally, I’d like something that I can set/unset easily, and can use to filter my view.
But I’m starting to think that maybe ID’ing observations within iNat is a mug’s game. It would probably be easier and less frustrating to just download all the observation data as-is, without trying to get it “right” first. Then I’d do a first pass of filtering to remove all the likely duplicates, observations from nuisance observers, etc. before going any further. Then I could run a program that brings up each observation on my screen so that I can choose to include/exclude it, and update any ID/annotation information I think is incorrect. In other words, I would do the ID/annotation work when it goes into my database rather than on iNat. That way I could focus my efforts on the useful observations.
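The first-pass filtering step of that workflow can be sketched in a few lines. This is only an illustration: the column names mirror the iNat CSV export as I understand it, the block list and the “same taxon + same date + nearby location” duplicate heuristic are my own assumptions, and the interactive second pass is omitted.

```python
# Sketch of the offline first pass described above: drop nuisance
# observers, then collapse likely duplicates before manual review.
import csv, io

NUISANCE_OBSERVERS = {"spammy_user"}  # hypothetical block list

# Tiny stand-in for an iNat CSV export (column names are assumptions).
SAMPLE = """id,user_login,taxon_name,observed_on,latitude,longitude
1,alice,Araneus diadematus,2025-05-01,60.1701,24.9414
2,bob,Araneus diadematus,2025-05-01,60.1702,24.9413
3,alice,Parus major,2025-05-02,60.1900,24.9000
4,spammy_user,Parus major,2025-05-02,60.1900,24.9000
"""

def first_pass(rows):
    """Keep one observation per (taxon, date, ~1 km grid cell);
    rounding coordinates to 2 decimals is a crude proximity check."""
    seen, keep = set(), []
    for row in rows:
        if row["user_login"] in NUISANCE_OBSERVERS:
            continue
        key = (row["taxon_name"], row["observed_on"],
               round(float(row["latitude"]), 2),
               round(float(row["longitude"]), 2))
        if key in seen:
            continue  # likely duplicate of an already-kept observation
        seen.add(key)
        keep.append(row)
    return keep

kept = first_pass(csv.DictReader(io.StringIO(SAMPLE)))
print([r["id"] for r in kept])  # → ['1', '3']
```

The survivors would then go to the interactive include/exclude pass; anything flagged as a duplicate never has to be looked at by hand.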
That seems like the thinking man’s solution. I’ve been banging my head against the wall long enough. The good folks here in the forum have convinced me that for years, I’ve been going about this all wrong. I should never have tried to ensure that the data was correct at the source.
Sorry, I’m not sure I understand your suggestion; wouldn’t that be practically the same as a reviewed button?
For me, reviewed is like saying: yeah, I saw this and I don’t want to ID it (impossible to ID satisfyingly; I don’t find IDing anything as ‘life’ useful so far)
So an ignore button would be like: hmm, yeah, I saw it but moved on because hell nah
I don’t know how some users just don’t care about what they’re posting. Not talking about new users necessarily (I excuse them), but people with thousands of observations accidentally posting 5-6 observations of random buildings and soil in between their 20-30 random unidentifiable trees (I get fried looking at it)
This is often the result of users not knowing how the site works, so it is good to leave a polite comment asking them to combine. Many users stop. If someone is uploading lots (like hundreds) of observations like this, you can flag and ask curators to intervene.
It depends. I download data from iNat for use in a curated website. I needed a way to quickly/easily mark observations for inclusion, so I started using “reviewed” for that. In my workflow, “reviewed” means “I’m good with this observation” (it’s easier/faster than adding an ID, and when you’re reviewing tens of thousands of observations a year - ~60,000 in 2025 - every second counts). For me, and probably for others doing similar work, it would be nice to be able to distinguish between observations that are ok vs. needs-to-be-revisited vs. I-can-ignore-this-one.
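For anyone with a similar workflow: the public iNat API does let you filter on the reviewed flag, as far as I can tell. The snippet below only builds the query URL, and the `reviewed`/`viewer_id` parameters reflect my reading of the API docs; verify against the documentation before relying on them.

```python
# Sketch: building an iNat API query for observations you have already
# marked as reviewed. The parameter names are my understanding of the
# v1 API (reviewed requires a viewer_id); no request is sent here.
from urllib.parse import urlencode

BASE = "https://api.inaturalist.org/v1/observations"

def reviewed_query(viewer_id, reviewed=True, per_page=200):
    params = {
        "viewer_id": viewer_id,              # whose review flags to use
        "reviewed": str(reviewed).lower(),   # "true" / "false"
        "per_page": per_page,                # page size for bulk pulls
    }
    return f"{BASE}?{urlencode(params)}"

print(reviewed_query(12345))
```

Paging through that with `reviewed=false` is one way to see only the observations you haven’t yet marked, which is roughly the filtering the multi-state flag would make finer-grained.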
But as I said, maybe the better solution is to not do ID’s on iNat at all, and just correct observations and add annotations as needed once I’ve downloaded the data from iNat. Then I don’t have to explain the corrections, or wrangle other experts into pushing IDs in the right direction.
I do most of my ID work on small and often-cryptic taxa. In many cases (often with student observers) multiple observations of the same example are ultimately what allow an ID for one, and thus all, of the observations. When 5 people each take a single photo of the same spider, often one of those photos has the necessary diagnostic details to assign a specific ID. If that one student didn’t “get lucky” with the right angle, the other 4 observations would have sat at genus (or worse) forever. So, while it is more ID work in general, it is often helpful for IDers.
I had a little chuckle about this. I was posting some observations from a trip, and realised I was actually in one of the photos, so couldn’t possibly have taken it myself!
In my case I am so far away from the camera that my whole body is visible looking in another direction at something that is not the subject of the photo.
I understand skipping the explanations, and not wanting to wrangle enough @mentions to convince the CID algorithm, given your thousands of observations.
But if you are using the obs, the observer would ultimately appreciate at least your informed ID. Or the other taxon specialists will, when they get to it in their workflow.
I should add that up until now, I have been contributing a great deal of identifications and annotations as well. My argument is that it would be nice if we had better tools for keeping track of this kind of work. That may not be enough for some folks, and they will still want to “remove” problem observations (like duplicates). That doesn’t mean we shouldn’t try to improve things.
Still, I suppose it takes a bit of guesswork (in addition to extra time and work) from you spider identifiers to “tie all the pieces together”: intense pixel-peeping to match various details of the background, checking whether location/date/hour are consistent… then entering the same identification-by-proxy four times while mentioning supporting bits of evidence from this or that other obs…
…only for the next identifier to disagree with your fine IDs, owing to a stringent “not enough details visible here”
That’s my thought as well. I need to look at each observation individually, and since I always keep my ID module sorted randomly, groups usually don’t show up together like in my first example. I couldn’t know about “more photos” I am supposed to ID from, if they are not included, right… right???
If 3 out of 4 photos (posted by four people as four separate observations) don’t allow a species‑level ID, I can’t push those three any further just because the fourth one is identifiable. It’s a completely different situation when it’s clearly stated somewhere: “observed with users X, Y, Z”, with links to the related observations. But I think I have only seen one user do this before - and the 2nd user did not make his own observation.
I was genuinely excited to see several extinct bee specimens suddenly appearing in my state, only to learn they were all the same individual uploaded by different users. And I only found that out because I asked.
I fully understand the “teaching new users” explanations, especially the “we do one together and then you try on your own” approach and that’s great.
But it does feel like this is becoming more common: running around on purpose to take ‘duplicate’ shots (not for education, more out of a “we don’t quite care” kind of motivation)
So, according to the pinned solution to the question: I shall skip them altogether, because I will definitely not ID one observation based on another.
That in turn is why I prefer to sort my IDs by Date Observed, ascending. I can see the duplicates, the splits across multiple obs, and the ‘we have finally worked out what that is’ cases, so I can leave a comment linking to the ‘better’ obs.
It’s of course irritating when some school class has been out and photographed dozens of observations of the same common species, and you have to try to find the more interesting ones made at the same time by someone else. But of course there is a positive side: some of those students may start using this app in their daily lives and become nature enthusiasts!
But it would be hard for identifiers if a big part of the world started using this app and trying to collect yearly or monthly species points. Too many pictures of the ordinary species, and probably too many poor pictures or ones with missing information as well. But the better the AI recognition gets, the better classified observations are even without anyone trying to identify them, so it’s not such a big problem. Researchers may then identify the observations that interest them, even decades from now.
BUT, if some hobbyist finds a species that has a low number of observations on iNat, or at least few that can be confirmed, then it would be a positive thing to make many observations of that individual (on one or many users’ accounts) and of course give enough information that they can be confirmed, maybe tagging some experts in those observations to get them to RG. Of course, slightly different pictures in each observation as well.
That way the species would be added to the AI lists faster, which would help new observations of that species get better identified from the start. I wonder whether any experts do this, because it would seem to be very helpful for “feeding the AI”.