Visually retouching images?

As to procedures that affect color balance… Minimize those changes, obviously, but in my opinion they’re not so important. Looking at lots of iNaturalist photos has taught me that differences in camera and lighting and who knows what else create significant variation in colors even without deliberate photo manipulation, especially for colors on the blue/purple part of the spectrum. Even the angle of my screen has an effect. Therefore, I can’t completely trust colors on iNaturalist. That’s frustrating(!), not least because the shade of color can be important for distinguishing taxa. I still do use color as a clue, but in some subtle cases, it’s not as reliable as I would wish.

6 Likes

In our palaeo lab we had a fairly simple rule: any edited image (or reconstructed object) which, when viewed in isolation without further knowledge or context, can be mistaken for a genuine/unedited/unaltered image or object… is kind of fraudulent and should never be publicized.
That’s why we “paint” (or reconstruct) the edited parts with vivid colors and/or different/removable materials, so that any kid who sees it/finds it in the dirt/on the web 5.0 a century from now, is able to say “oh, this fossil [picture] has obviously been edited/altered/manipulated/reconstructed/improved”.

If some unsuspecting guy somehow finds your “repaired” .jpg somewhere on the internet, is it obvious (even after cropping/resizing/recompressing horribly) that it’s been photoshopped?

2 Likes

I’m curious, what do you think about forms of editing like focus stacking? It’s common in macro photography.

1 Like

I don’t see any problem with this, since it is just photography (using a sensor to map the light reflecting off an object). The problem is when the object in the photo is edited to appear as a different object.

I’m not sure why this is an issue. It must be assumed that any image floating around on the internet may be photoshopped or computer generated; this is very different from a bone in the ground.

1 Like

I think what you did is OK, personally. I wouldn’t make a habit of it, but in this particular case, I don’t think it did any harm.

1 Like

Sure - anything on the internet or in newspapers could be fake, so… why not create some more and/or help spread them. Nothing wrong with that.

(The point of enforcing such a policy is of course not to assess whether anything found on the internet or in the soil is a fake or not; but rather to ensure that at least those photos and data pro/amateur scientists leave behind, may retain a little more value than the average dubious stuff of unknown origin and no chance of traceability. Among habits to try and acquire, as much as possible: label stuff - add metadata to stuff - keep track of stuff - backup stuff - leave traces of who did what to/with stuff. A watermark, color overlay, post-it note, pencil mark, or other thoughtful sign… does no harm, takes no time, helps everybody. Alas, paper stickers and jpeg files sometimes survive longer than the sweet brains of their authors.)
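(The “vivid color overlay” habit mentioned above can be sketched in a few lines. This is a toy illustration only, assuming an image represented as a plain grid of RGB tuples; the function and variable names are made up for the example, not any particular library’s API:)

```python
# Toy sketch: flag an edited/reconstructed region with an obvious magenta
# overlay so nobody can mistake those pixels for original data.
# The "image" is a plain grid (list of rows) of (R, G, B) tuples.

MAGENTA = (255, 0, 255)  # deliberately unnatural, easy to spot


def flag_edit(image, top, left, height, width):
    """Return a copy of `image` with the edited region painted magenta."""
    flagged = [row[:] for row in image]  # copy each row, leave input intact
    for y in range(top, top + height):
        for x in range(left, left + width):
            flagged[y][x] = MAGENTA
    return flagged


# A 4x4 gray "photo" in which the lower-right 2x2 block was retouched:
photo = [[(128, 128, 128)] * 4 for _ in range(4)]
marked = flag_edit(photo, top=2, left=2, height=2, width=2)
```

(The same idea applies to physical reconstructions: the marker must survive casual copying, so it lives in the pixels themselves rather than only in metadata.)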

2 Likes

Random images floating around the internet could be someone’s graphic design project, etc. I don’t see how the mere existence of a photoshopped clamshell image on the internet is harmful. The problem arises when it is claimed to be a specific observation on iNaturalist without disclosing the edits.

Newspapers are an entirely different situation and not relevant to this discussion.

2 Likes

See edit above. Disseminating poor, less-and-less-usable, or untraceable info is not limited to newspapers. It happens all the time with naturalist data, hence such thoroughly-crafted policies. It’s of course entirely possible that some cultures or schools of thought may not like cautionary approaches, or just not care, or share fewer concerns.
Still, images routinely leave iNaturalist, and, if not properly and carefully annotated, they are at risk of losing information in the process, including (but not limited to) evidence of the edits they received. Good or bad? Don’t care? It’s up to anyone to decide.

2 Likes

Playing with white balance, exposure or contrast does not introduce new information, it uses the information in the photograph. Presumably the person fiddling with the image is doing so in order to provide a more readily identifiable observation.

All of these parameters are used by a digital camera to create an image that inevitably departs in ways subtle and obvious from what the photographer perceived. They also provide the means to extract information from poor-quality images. A pretty common case would be a photograph of a moving subject that was grabbed quickly, without the opportunity to change lenses or camera settings, producing an underexposed image that was just a silhouette to the naked eye. Playing with the image can reveal field marks and colour that are sufficient to make an ID, even if the end result is a little weird. This is not deception; it’s using the available information in a different way.
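(The recovery described above, pulling field marks out of an underexposed silhouette, is at its core just a remapping of brightness values the sensor already recorded. A minimal sketch in pure Python on a handful of luminance values; real tools work per channel with curves and gamma, but the principle is the same, and the names here are illustrative only:)

```python
# Linear contrast stretch: remap the recorded brightness range onto the
# full 0-255 display range. No new information is added; the values the
# sensor already captured are just spread out so the eye can see the
# differences between them.

def stretch(values):
    lo, hi = min(values), max(values)
    if hi == lo:  # flat patch: nothing to stretch
        return list(values)
    return [round((v - lo) * 255 / (hi - lo)) for v in values]


# An "underexposed" patch: everything crammed into a dark, narrow band.
dark_patch = [8, 10, 12, 15, 20]
recovered = stretch(dark_patch)  # now spans the full 0-255 range
```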

Adding or deleting colour or marks is a completely different matter.

7 Likes

The recent trend of supplementing image-processing software and consumer devices with ‘AI’ algorithms (e.g. intelligent noise reduction, content-adaptive resizing, texture detection and enhancement, automagic skin imperfection reduction, whatever) blurs the line -pun intended- between content enhancement and content creation. Not much of an issue with desktop software (‘Photoshop’) whereby users retain some control over what happens to images… but who can still tell exactly what in-firmware AI-processing steps and magic ‘Wow!’ filters are applied by smartphones and their apps? Devices which are deliberately kept simple and automatic (and shrouded in secrecy, and poorly-documented) in order not to annoy the average consumer…

I’ve been puzzled, recently, by the photo of a lizard uploaded to iNat; probably taken from far away with one of these modern ‘smart’phones, it exhibits strange directional/texture artifacts (AI-powered digital zoom? skin beauty filters?) that do not help with ID.

4 Likes

I know you’re trying to help, but repeating something heard second- or third-hand often just confuses things, so it’s best to cite something on iNat itself.

There isn’t any official rule on iNat addressing photo manipulation aside from this:

Photos and sounds attached to observations should include evidence of the actual organism at the time of the observation, observed by the user who is uploading the observation. Media used in your iNaturalist observations should represent your own experiences, not just examples of something similar to what you saw.

So I guess it depends on how you interpret that, but in my opinion, @earthknight said it well on where one draws the line:

Color balance, contrast, white balance, sharpening, cropping, etc all good, but I would not make changes to the actual content of the image like that. Damage is part of the observation, and sometimes things can be seen in the damaged portions that can assist in observations.

One shouldn’t alter the actual content of the image by removing or adding things aside from cropping the image. Focus stacking is fine.

Pretty much any smartphone photo was partially created using “AI” and all kinds of computational photography, so I don’t think there’s any getting around it. And I personally do use AI denoisers like DXO’s. I understand there are some potential issues there, but so far I haven’t seen anything alarming as far as features being added or subtracted.

6 Likes

AI has implications for pretty much every aspect of society. So-called deep fake photographic and videographic/audiographic creations are just part of very challenging problems it presents.

But it is already possible to alter metadata in iNat submissions in ways that would severely damage at least some aspects of iNat’s efforts if they became widespread. These sorts of things, whether AI-related or the plain old vanilla issue of people claiming to have seen a rare bird far out of range in a submission to a local rare-bird sightings committee, have always been a potential problem for data quality in what is these days referred to as citizen science. It’s a problem of balancing risk and reward rather than a matter of eliminating the risk. There have always been, and there will always be, people who derive some perverse pleasure from faking observations, and they will hopefully remain a small enough proportion of the community that the risks are manageable. If that turns out not to be true, then I expect citizen science in general will be in trouble. The costs of eliminating risk would be pretty large if bad-faith actors became very common, and it is doubtful that the current models would be sustainable.

It seems likely to me that AI fakes will be motivated by things other than extracting information from crappy images. The images I’ve posted that employed those sorts of techniques are pretty obvious. The whole AI game seems to be the creation of perfect fakes. I can’t think of a motivation for anybody using AI to create the sort of image you get from boosting exposure, contrast and saturation on an underexposed photo of a bird or insect.

3 Likes

You wouldn’t want CV training to be affected by a damaged organism either. I’d say careful honest retouching is likely to be better for CV training.

1 Like

The CV doesn’t actually learn distinguishing traits of organisms – it learns to recognize typical images of that organism. In that context, I think there is an argument for including images that reflect conditions found in the field, which includes specimens that are damaged and worn out. A recognition algorithm that is only trained on “perfect” photos is going to be of limited use for identifying photos taken of uncooperative subjects by non-expert photographers.

10 Likes

I thought the point of the CV was to help us ID the images we post, not to give us reasons to only post ideal images for CV training. I get the sense that a notion is taking hold that images should only be posted if they are good for the CV, and I think that is the opposite of the intent of the CV and this website in general.

6 Likes

The images chosen to train CV are deliberately from various observers and different cameras.

Yes, the model is not supposed to be trained on perfect specimens. It’s trained to recognize iNat photos of these organisms, so including various backgrounds, cameras, morphs, lighting, damage, etc., is helpful in that regard. Much better than “fixing” the photo by adding or removing content, I’d say.

4 Likes

Similarly to the way the point of iNaturalist itself has sometimes been distorted, i.e., researchers are free to use iNaturalist data, but most iNaturalist users are not here specifically to provide researchers with data. The CV is meant to be a tool, not an end goal.

3 Likes

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.