I’m glad this topic wasn’t about people yassifying their observations and adding them to the Cursed Nature project, which is itself a form of AI image enhancement.
When you take a photo with a phone, it first needs to decide which camera(s) to use.
Most will take a burst (a number of frames in quick succession) and combine them to give a wider dynamic and colour range. During this process it will also apply any colour filter you’ve chosen.
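The noise-reduction part of that burst trick is easy to see in miniature. Here’s a toy sketch (my own illustration, not any phone vendor’s actual pipeline): averaging N aligned frames cuts random sensor noise by roughly sqrt(N), which is one building block of multi-frame processing. Real pipelines also align frames and weight different exposures.

```python
import numpy as np

def merge_burst(frames):
    """Average a burst of aligned frames to reduce random sensor noise.

    Toy stand-in for multi-frame merging; real phone pipelines also
    align frames and blend different exposures.
    """
    stack = np.stack([f.astype(np.float64) for f in frames])
    return stack.mean(axis=0)

# Simulate a 16-frame burst of the same flat grey scene with sensor noise
rng = np.random.default_rng(0)
scene = np.full((4, 4), 100.0)
burst = [scene + rng.normal(0, 10, scene.shape) for _ in range(16)]

merged = merge_burst(burst)
single_err = np.abs(burst[0] - scene).mean()   # error of one raw frame
merged_err = np.abs(merged - scene).mean()     # error after merging
```

With 16 frames the merged error should land around a quarter of the single-frame error, which is why even a cheap sensor can produce a clean-looking shot.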
Higher-resolution photos come from interpolative infill: the software estimates the missing pixels between the ones the sensor actually captured.
These images are definitely enhanced but not altered. There is some fuzzy logic or pattern recognition in use but not genAI.
The brand and model starts to matter when Portrait or Macro modes are selected.
Most cameras will apply some smoothing filters to Portrait but some go overboard and apply a retouch to faces and human skin.
A lot of phones, even expensive ones, have a low resolution camera for macro. Often cropping from normal mode photos gives a better result than using Macro.
I think there are deeper epistemological fault lines (fundamental differences in how we define truth) at work within the iNaturalist community. On the surface, it appears split between two value systems:
- Procedural trust: if the image came straight out of the camera (SOOC), no matter how unclear (a 3-pixel organism), it’s treated as reliable because the process was unmediated. It seeks observable, measurable facts; iNaturalist leans this way, asking for verifiable data points. At its core, the concern is about provenance and standards of authenticity - the “purity” of the pipeline, not the prettiness of the picture. However, when you exaggerate this position to its limit, the cracks in the logic become impossible to miss; a seven-day solargraph using a pinhole in a Coke can could be considered pure, but the result is also far stranger to the human eye than a carefully adjusted DSLR image. (This is an extreme case, but it illustrates the point.) Sometimes even the methods used in the field don’t move us closer to reality; they move us away from it.
- Resulting clarity: if a minor edit brings the image closer to what the eye actually saw (say, correcting poor dynamic range), it’s seen, I think, as increasing truth rather than departing from it. The human eye juggles roughly 20 stops of dynamic range; most cameras manage barely half that (depending on sensor and conditions). Lifting shadows or taming highlights often rescues reality rather than inventing it. On the other hand, even routine de-noising, though often useful, can dissolve fine detail along with noise. That doesn’t just risk a “useless” report; it can lead to under-reporting certain features.
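That “detail dissolves with the noise” trade-off is easy to demonstrate. Here’s a toy example (a crude moving-average filter of my own, not any camera vendor’s actual NR algorithm): smoothing removes noise, but it also flattens a one-sample spike standing in for a fine diagnostic feature like a thin wing vein.

```python
import numpy as np

def box_denoise(signal, k=5):
    """Moving-average denoiser: a crude stand-in for in-camera NR."""
    kernel = np.ones(k) / k
    return np.convolve(signal, kernel, mode="same")

x = np.linspace(0, 2 * np.pi, 200)
clean = np.sin(x)
clean[100] += 1.0                      # the fine "diagnostic" detail: a 1-sample spike
noisy = clean + np.random.default_rng(1).normal(0, 0.1, 200)

smooth = box_denoise(noisy)

# How much the spike stands out above its neighbours, before and after NR
spike_before = noisy[100] - (noisy[99] + noisy[101]) / 2
spike_after = smooth[100] - (smooth[99] + smooth[101]) / 2
```

After filtering, the spike barely rises above its neighbours at all: the noise is gone, but so is the feature an identifier might have needed.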
Some say that Lightroom (LRC) and the like are a black box, but I think that’s a little misleading. It might be more accurate to say it’s a commercial black box, not a physics mystery (and the difference matters when discussing transparency). Adobe knows exactly what “Denoise vX.Y” or “Shadows +30” does; users can record those settings (XMP). The in-camera pipeline (demosaicing, tone curves, NR, sharpening, lens corrections, HDR/stacking) is largely proprietary. A “straight-out-of-camera JPEG” is already the product of hidden algorithms. In that sense, SOOC could also be considered a black box.
Both sides of the divide believe they’re protecting the integrity of the record, but they define “truth” differently: one in method, the other in result.
To strike a reasonable balance, perhaps we could say that provenance (procedural trust and SOOC) is the backbone of evidence; transparent edits (the clarity adherents) are the readable handwriting on it. Without the backbone, clarity is decoration; with it, clarity becomes insight. In other words, it might end up being both/and, not either/or.
Practical fix: keep the untouched original (as many others have pointed out), post the clarified version on a separate layer, and flag the tweak, as in radiology, where the original DICOM is preserved and annotations live in a separate presentation state.
There are other considerations too. Peer‑reviewed studies (Diversity 14(5):316) show iNaturalist data can be biased by location, uneven samples, and misidentifications; in other words, the “unedited” record is already shaped by selection effects and omissions. AI is also deeply embedded: computer‑vision IDs, habitat models, focus assist, in‑camera processing. Editing is simply where we choose to notice it.
Rather than treat iNaturalist as a sacred archive of unfiltered truth, we should acknowledge that every image is a negotiation, between subject and observer, camera and eye, intention and result. This isn’t an argument for wholesale AI fakery, but for recognising that sometimes a thoughtful edit brings us closer to what was seen, and that, at times, the raw file can be the bigger lie.
I’m not suggesting we abandon standards. I’m suggesting we recognise where they already bend, and think more deeply about why we draw the lines we do. Because at the end of the day, this isn’t just about pixels; it’s about what we think knowledge is.
Nobody is arguing that no photo editing of any kind should be allowed, or that types of editing such as denoising or lightening shadows “distort” the “truth” of images in ways that are unacceptable.
The concern is not even about AI in general, but about specific types of AI (generative AI) or, more generally, with tools that process photos in ways that may have little relation to what was recorded in the original (e.g., inventing or adding details that were not there).
Data biases (who observes what and where, what gets identified) are completely irrelevant to this discussion. Questions about over- or underreporting, data gaps, and representativeness of samples are completely different than faked, made-up, or unreliable data (i.e., data based on evidence that has been manipulated).
Heartfelt thanks for your epistemological insights.
However, I note this is another round of conflating various things (‘CV IDs… in-camera processing’) under the over-hyped, under-accurate word ‘AI’, no matter the mathematics at play or ultimate motive. Won’t be long till some genAI supporter of the ‘Mister Gotcha’ type comes (once more) to taunt us, “you dislike wing nervation reshuffled by genAI NeatSkin™ filters of smartphones, and yet you configure your mirrorless digital camera with +2 of sharpness, I’m very clever”. Please don’t. :)
By the way, to anyone wondering, the ‘AI-powered’ (generative, ‘repairing’/denoising) features in Lightroom Classic can absolutely be avoided, they are optional and opt-in.
Incidentally, various cases of ‘cloning’ parts of photos (a very simple edit to make, with or without ‘AI’) have been reported in the scientific literature, leading to retractions (see e.g. Dr Elisabeth Bik’s reports). It’s not as if ‘image tampering’, even of the most basic kind, were already rampant and accepted.
A rather unsatisfying paper imho. And maybe there’s a pun there, but it seems the authors couldn’t even get the very first word of the title right (assess ≠ access).
Ooohhh, you’ve caught me I was that close to rolling in with my “+2 sharpness = NeatSkin™” TED Talk. But seriously, my point’s just that the OP’s “generated or enhanced” covers more than fakery, and keeping those categories distinct is the whole reason I split them in my post.
Nobody is arguing that no photo editing of any kind should be allowed, or that types of editing such as denoising or lightening shadows “distort” the “truth” of images in ways that are unacceptable.
The concern is not even about AI in general, but about specific types of AI (generative AI) or, more generally, with tools that process photos in ways that may have little relation to what was recorded in the original (e.g., inventing or adding details that were not there).
Data biases (who observes what and where, what gets identified) are completely irrelevant to this discussion.
The OP’s concern was “AI generated or at least heavily enhanced” images, meaning any alteration that could affect research validity, not just fabricated details. That’s why I’ve been talking about edits like NR, WB, and shadow‑lifting: they don’t invent anatomy but can still change how diagnostic traits appear. The bias/embedded‑AI point was to show these enhancements sit in a bigger context where observations are already shaped by other processes.
Unfortunately, the authors of that paper did not have a sound understanding of iNaturalist: both of the frog species they assessed are auto-obscured on iNat, and all of the supposedly ‘erroneous’ location data arose because they treated the random, obscured coordinates as the true coordinates, so all of their results and conclusions in that regard should be discounted. I actually raised this issue with the authors when the paper was first published; they realised they had messed up and asked the journal editor to retract the paper, but the editor refused. The paper continues to be cited and used as an example of ‘bad’ iNat data.
(apologies for the off-topic comment, I just wanted to clarify because this case was a frustrating one)
I take it as “AI generated or AI enhanced”, but I’m not in the OP’s brainz. I guess that applying a gamma curve to lift sensor data out of its unusable, dark, pristine raw condition is not what was meant by ‘enhanced’.
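For anyone unfamiliar with why raw data needs that curve at all: sensor values are linear in light intensity, which looks very dark on a gamma-corrected display, so a power curve is the standard first step of simply making the image viewable, not an “enhancement” in the generative sense. A minimal sketch (using the common ~2.2 display gamma; values and function names are my own illustration):

```python
import numpy as np

def apply_gamma(linear, gamma=2.2):
    """Map linear sensor values in [0, 1] to display-ready values.

    Raw sensor data is linear in light; a ~1/2.2 power curve is the
    conventional step that lifts it out of its dark-looking raw state.
    """
    return np.clip(linear, 0.0, 1.0) ** (1.0 / gamma)

# An 18% grey card: quite dark in linear light, near mid-scale after gamma
mid = float(apply_gamma(np.array([0.18]))[0])
```

No detail is invented here; every output value is a fixed, invertible function of the input, which is exactly why this kind of “edit” sits in a different category from generative infill.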
Thanks for the heads‑up on that paper, Beachcomber, and for all the ID work you’ve done on my observations over the years. I knew a 600‑word post was bound to get me into trouble somewhere, but I figured I’d have a crack at it. If nothing else, it keeps life interesting.
My phone has the ability to remove unwanted objects/people and fill the gap with adjoining background: wow, it looked like a pristine scene without all the people contaminating it; you couldn’t tell it was severely edited. I recently complained to a nature org that one of their photos of a monarch was phony on many levels: wrong body language for a monarch in the wild, incorrect shadowing, inharmonious pixelation, wrong target plant… but they took the word of the photographer, and still have no caveats about AI manipulation of photographs.
We need a set of Ten Commandments, and #1 needs to be: no amalgamating photos or superimposing anything from another photo. Where’s the button for “unedited, the real thing”?
Rendering something better than real life is no longer real life.
I recently encountered an observation of a bird that was identifiable, but looked oddly textured. When asked, the observer noted that their phone “enhanced” the image using an AI photo editor. The end result is an overall look that’s not quite true to life, with some minor features that aren’t really present on the bird, and that weird smoothed-out look that AI often gives to photos.
I’ve heard that some newer phones automatically run some AI “upscaling” on photos now, which sometimes results in unwanted consequences like a photo of a menu with gibberish text, because the AI upscaling tried but failed to sharpen the photo.
I’m curious to hear what iNat’s policy is/will be towards AI-upscaled or -enhanced photos. In some cases it’s no different than a blurry photo from a digital (or even scanned-in film) camera, or a photo edited the old-fashioned way in Photoshop, but the difference is that even the observer might not know how the photo was actually changed.
Where should the line be drawn between photo editing and adding something to the photo that wasn’t actually present in real life? When ID’s sometimes are based on tiny details, it can be hard to tell what is real and what is an artifact of the low resolution of photos.
(I moved your post to this existing topic, please feel free to continue discussions)
Thanks! It seems like most of my questions were already discussed, and some policy change is hopefully in the works.
OK, this has been released: https://www.inaturalist.org/blog/118284
Thank you very much for all your work! I think this is a fair solution. I have a couple of observations in mind that need to be flagged. I’ll get out the high-powered glasses and we’ll see if I’m right.
I actually hope I am wrong and photography has really progressed that much! Thanks again!
This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.