I’d be happy if someone could name just one piece of proprietary camera equipment that is calibrated for iNaturalist work. The reality is that none exists: cameras are designed for general use and are not tailored to any particular observational application, iNaturalist included.
I am not merely equating AI manipulations with benign photographic adjustments. My argument is that we should broaden the discussion and scrutinize all forms of image manipulation consistently. By that I mean we should not single out AI-induced artifacts as uniquely problematic when more conventional photographic techniques and anomalies (e.g., lens choice, focal length, rolling shutter) also significantly alter images. While AI-generated details and traditional artifacts arise from different technological foundations, they are analogous in how they can distort the reality captured in an image; both affect the image’s authenticity and its reliability for accurate identification and analysis on platforms like iNaturalist.
As an example, there are instances in photography where details are not added but rather omitted, such as when parts of an image are under- or overexposed, or so far out of focus that no information in those areas is recoverable. This raises a question: if a photo can be tagged as Research Grade (RG) on iNaturalist even though it is missing information due to traditional flaws, why not a photo that has extra information from generative AI?
To clarify my point, iNaturalist’s criteria for Research Grade status require that observations have a date, a location, and photos or sounds, and that they show wild organisms. Additionally, the community must agree on an identification at the species level or finer, with more than a two-thirds consensus. This ensures that the data shared with scientific partners is as accurate and reliable as possible. Moreover, iNaturalist acknowledges that poor-quality photos can still be useful if key diagnostic features are visible, as demonstrated by the example of a blurry, heavily cropped photo of a Wedge-tailed Eagle that still reached Research Grade because of the diagnostic shape of the tail. This example reinforces my point: if a photo with so much missing data can be used for identification, then why not a photo altered by generative AI that adds a few feathers to a bird’s wing but retains the other distinctive morphological features and accurate location data?
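To make that consensus threshold concrete, here is a minimal sketch in Python of the “more than two-thirds” rule. The function name and the flat agree/disagree counting are my own illustration, not iNaturalist’s actual code; the real community-taxon algorithm also scores identifications against ancestor and descendant taxa rather than a simple tally.

```python
def meets_rg_consensus(agreeing_ids: int, total_ids: int) -> bool:
    """Hypothetical sketch of the Research Grade consensus threshold.

    An identification carries the community only when MORE than
    two-thirds of identifiers agree (strict inequality). iNaturalist's
    real algorithm is richer: it walks the taxonomic tree instead of
    using a flat count like this.
    """
    if total_ids < 2:  # a single identifier cannot form a community consensus
        return False
    # Integer form of agreeing / total > 2/3, avoiding floating-point error.
    return agreeing_ids * 3 > total_ids * 2

# Examples: 2 of 3 identifiers is exactly 2/3 -- not enough;
# 3 of 4 (0.75) crosses the threshold.
print(meets_rg_consensus(2, 3))  # False
print(meets_rg_consensus(3, 4))  # True
```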
An earlier post highlighted the critical difference between a drawing and an AI-generated digital image, noting that a drawing is unlikely to be mistaken for a genuine photograph. This raises a crucial question: given the inherent flaws and biases introduced by cameras and lenses, what exactly does a ‘genuine photograph’ look like? One can argue that all photographs are, to some extent, interpretations of reality. Even when a ‘real image’ is formed by light striking a digital sensor or film, that light is still interpreted and manipulated on its way to the final picture: demosaicing, white balance, tone curves, noise reduction, and in-camera sharpening all reshape what the sensor recorded. So how do we define and agree upon the authenticity and accuracy of a photograph on scientific and observational platforms like iNaturalist, or anywhere else for that matter?