Use of AI upscaling

Has that been your experience in the numerous instances in which you have upscaled arthropod photos?

To add a more concrete example: the upscaled flycatcher photo here has altered the shade of red and the concentration of the red colour in a way that is different from normal variation due to lighting. It now looks subtly more orange, with less contrast between the cheek and throat. Colour saturation is notoriously difficult to depict accurately, but to me this looks different from the normal over- or under-saturation that can happen in photos.
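For anyone who wants to check whether an upscaler has shifted colours beyond normal exposure variation, one rough approach is to compare per-channel statistics of matching crops from the original and upscaled files. A minimal sketch (the pixel values below are invented stand-ins for a red cheek patch in both versions, not measurements from the actual photos):

```python
# Rough check: compare mean RGB and a crude saturation measure between
# an original crop and its upscaled counterpart. Pixels are (R, G, B)
# tuples; in practice you would sample matching regions of both files.

def channel_means(pixels):
    """Average R, G, B over a list of (r, g, b) tuples."""
    n = len(pixels)
    return tuple(sum(p[i] for p in pixels) / n for i in range(3))

def mean_saturation(pixels):
    """Crude per-pixel saturation, (max - min) / max, averaged."""
    sats = []
    for r, g, b in pixels:
        hi, lo = max(r, g, b), min(r, g, b)
        sats.append(0.0 if hi == 0 else (hi - lo) / hi)
    return sum(sats) / len(sats)

# Invented pixels standing in for the red cheek area of both versions.
original = [(200, 40, 30), (190, 35, 25), (210, 50, 40)]
upscaled = [(205, 80, 30), (195, 75, 28), (212, 90, 42)]

for name, px in (("original", original), ("upscaled", upscaled)):
    print(name, channel_means(px), round(mean_saturation(px), 3))
```

A green-channel rise relative to red (red drifting toward orange) that appears only in the subject, rather than uniformly across the frame, would hint that the edit did more than adjust exposure.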

Recently, I found a slightly unexpected genetic difference between two populations of a bird that looked seemingly identical. Subtle colour variation had been described in the species, but its exact geographic turnover was uncertain, as it involved only very slight differences in shades of yellow. I wanted to know whether the divide in colour variation matched the divide in genetics that I had found, but the relevant specimens were in far-away museums. Luckily, there were plenty of photos I could look at on iNaturalist.

The difference in yellow was so subtle that I wasn’t sure whether I was seeing things, so I scrambled the photos and asked some friends to categorize them, and sure enough, they put them into categories that matched their geography. That’s not a thorough enough test to publish, but at least I could now comfortably recommend the correlation between colour and genetic variation as something that needs further study in that species. The colour variation in that species is no less subtle than the difference between the original and upscaled flycatcher photos.
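The blind-sorting check described above can be sketched in a few lines: strip the location labels, shuffle the photos, have someone sort them by colour alone, then score the grouping against geography. All names and data here are invented for illustration:

```python
import random

# Each photo is (photo_id, true_region); the sorter never sees the region.
photos = [("p1", "north"), ("p2", "north"), ("p3", "south"), ("p4", "south")]

def blind_trial(photos, sorter, seed=0):
    """Shuffle the photos, ask `sorter(photo_id)` for a group, and
    return the fraction of photos whose guess matches geography."""
    shuffled = photos[:]
    random.Random(seed).shuffle(shuffled)
    hits = sum(1 for pid, region in shuffled if sorter(pid) == region)
    return hits / len(shuffled)

# A stand-in "friend" who happens to sort perfectly by colour.
guesses = {"p1": "north", "p2": "north", "p3": "south", "p4": "south"}
print(blind_trial(photos, guesses.get))  # → 1.0
```

With only a handful of photos a perfect score can still happen by chance, which is presumably why this kind of informal check motivates further study rather than a publishable claim.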

8 Likes

The photo focus stacking I have done does not induce any changes (that I know of) and I have not used upscaling. I would suspect that upscaling might introduce enough small but potentially important changes that iNaturalist should indicate that the image has been altered.

What a coincidence. The upscaling that I have done also has not induced any visible changes other than pixel dimensions. If you haven’t tried upscaling, then I wonder what the basis is for your suspicion that it might modify the subject. Suspicion without evidence is not a strong foundation, in my view.

I believe this is in reference to “own content”.
The Topaz AI sharpening functions have three parts: shake reduction, focus correction, and blur removal. This is not considered “generative” AI, which adds or subtracts data in images; the sharpening feature gives a more realistic finish without the halo/fringing caused by more traditional sharpening.
Community TopazLabs AI
Nov 2023 content feedback from Topaz
Quote: “You as a user have full license to sell or copyright your images and videos just as you would normally. Our AI software does not generate new works or content from scratch it only enhances what is already there in the image or video file that you put into the app. When we train our AI models we use a lot of images and videos that we have licensed or already own to train the models to know what is in an image and how to enhance it within the parameters of the task. This is different than any of the text based AI prompts like DALLE-2, Midjourney or Adobe Firefly that uses their datasets to create a new work of art based on examples it was trained on.
Our apps can be thought of like the original works of creative art made in Photoshop when it first came out in that the artwork created is that of the artist doing the work and not the software or the parent company.”

2 Likes

I think there’s a clear difference between AI upscaling (such as the kind you see in Topaz Labs) and the example posted, where the photo is run through an AI and a whole separate image is “generated”. The latter should not be permitted on iNaturalist, imo, as arguably little of the original photographic evidence of the organism remains, but I can see why this would be a bit of a debate.

10 Likes

That is so obviously a fake image of a Vermilion-ish Flycatcher. The details are wrong, the color placement is wrong, the structure of the feathers is wrong. It cannot be used to successfully identify a real bird!

I don’t see how it’s even remotely okay for this image to be attached to an observation, much less as the first image. Why hasn’t it been removed yet?

7 Likes

I’m not a bird expert (I specialize in ants), but I don’t think these kinds of edits are acceptable on iNat. Some insects are identified by the number and length of hairs, and this editing program clearly alters the number and size of feathers and the shape of the eyes and beak, so it is almost certainly changing the identifying features in at least some taxa.

2 Likes

As a curator, I think it has not been removed because there is no clear policy on this, but I did leave a comment asking the observer to remove it.

If this were the only image I think a DQA vote for no evidence of organism would be appropriate

3 Likes

I flagged it, which I don’t entirely understand the etiquette of, but it seems like it could fall under “other”.

I don’t want to persecute this user or anything, but this image doesn’t belong anywhere near images which might be useful to study real birds. As a birder, it’s so wrong that it sets my teeth on edge.

If this were the only image I think a DQA vote for no evidence of organism would be appropriate

Yes, the artificially-generated image can’t be identified by itself and doesn’t add any value to the observation. But the original photos are pretty good, and are plenty sufficient to make an ID.

6 Likes

AI upscaling should not be used for images in observations. My reasoning is simply that the iNaturalist data is learning material for machine learning algorithms.
We, humans, decide if the IDs proposed by the algorithms are correct.
We already have the problem of these AI IDs being blindly confirmed by humans. This is, in fact, AI using humans to convince itself that it is right with its IDs.

If we now add raw data (observations) that is polluted by AI upscaling, we’ll amplify the identification problems: no one, not even experts, will be able to check an unusual ID anymore.

Apart from that, some (sub)species have such subtle differences that there is really no room for AI upscaling; it is almost guaranteed that the algorithm will just invent the features of a popular subspecies.

4 Likes

What I really want to do is map out different examples from Krea and Topaz and others, with different levels of editing and forms of AI… as ultimately I just don’t think this is something that would be easy to gatekeep, as some seem to wish… which is my point here… and why I am leaving the image in play for now. I think it raises an interesting question.
But I am travelling now, so mapping out the different AI upscalers isn’t something I have time to do right this moment.

Leaving it up is ultimately a drop in the ocean in any case…
It is clearly marked as AI and the original is included, so it does not invalidate any existing guidelines. But if people wish to keep the flag intact and leave it as casual for the time being, then fair enough too! It’s not like it’s a crucial datapoint.

I will come back to this later with more examples showing a fuller spectrum of how different programmes upscale. I agree with others though, it does seem Topaz works in quite a different way. I imagine most users use Topaz. I had never even heard of Krea until the other week.

:heart:

4 Likes

Another approach to this matter is to start by recognizing that this is basically derivative artwork based on the original photo.

I have been known to make coloring pages based on my own bird photos, but it never occurred to me to upload those to observations, because what value would that be adding?

Even if we got to the point where there was technology that could do this undetectably, it still wouldn’t be adding any evidence to the observation. You can’t even use it to study how computers generate images of birds, because to do that you would really need to generate your own images so you could control the variables. Can we effectively stop people from adding undetectable AI images? I guess we’ll find out when the technology gets to that point, but that doesn’t mean we ought to be encouraging it.

4 Likes

As long as we’re bringing sketches into it, consider this scenario:

Someone is having an “encounter” with a certain mushroom or cactus, which results in additional “encounters” with organisms that do not exist for the rest of us. If they draw what they see, who’s to say that it isn’t a record of their encounter?

I think most of us would overrule those sketches because of a lack of “supporting evidence”; but what do we consider to be “supporting evidence”? At least the AI upscaling is based on an organism that was really present (as opposed to an AI hallucination, which is also possible). Is an AI-upscaled image no longer a record of a real encounter?

If I’m not mistaken this seems to be one of the biggest problems. We have had other threads on that very issue. If I remember right, it was mentioned in some of these threads that most of what AI can do, someone working with Photoshop can also do; the difference is that AI makes it so much faster and easier.

1 Like

I think the conceptual comparison to a sketch is an interesting one. An AI upscaled image in which the AI fills in details based upon its training dataset is essentially returning the AI’s “impression” (a sketch) of what it “thinks” the subject most likely looked like in higher definition.

With sketches, especially those not made from life, I encounter submissions which lack key field marks or have combinations of field marks/conflicting evidence about the specific identity of the organism sketched. I generally ID these to the most specific taxon (genus, family) that seems likely given the mix of/lack of traits in the sketch.

If the details of an AI-based impression/sketch are incorrect, IDers could treat it the same way: perhaps it can be IDed to genus (or some other taxon) based on the AI output, but if the details in the upscaled pic conflict with what the IDer expects for a given species, they could avoid IDing to that level.

1 Like

I think a crucial difference here is that the field sketch is obviously a field sketch. You know to treat it like a field sketch, subject to limitations such as the observer not knowing which details to include or not being able to draw them accurately. It’s not pretending to be a photo, which is a different type of evidence and is subject to different limitations (such as color distortion due to lighting).

An AI-produced image may not currently be capable of fooling most trained observers, but some people will be fooled, and the technology is expected to get harder to detect. So there’s no way to know what kinds of limitations should apply: is this a photo with just enhanced lighting to better match what the observer saw with their eyes? Is it a sort of stylized artistic re-imagining? Is it a best-guess reconstruction of a memory? For it to be relevant as evidence, you have to know what kind of evidence it is.

8 Likes