Coming soon: Deepfake nature photography?

I’d be more impressed if the system didn’t fabricate anatomical details:
https://petapixel.com/2022/06/28/photographer-successfully-uses-dall-e-2-ai-to-edit-his-photos/

What do you think? Is this a future problem lurking for the iNat data quality handlers?

7 Likes

There is a way to investigate/question the EXIF data of a photo; however, it’s imperative to know what to look for and what to compare it against. Maybe the iNat submission script could do that, even though the software is already overworked and requires tons of memory at both the server and client ends?
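
For example, a first-pass check might look something like this sketch (Python with Pillow; the tags checked and the “suspicious” rule are illustrative guesses on my part, not an actual iNat feature):

```python
# A minimal sketch of the kind of EXIF sanity check an upload script
# could run. Assumes Pillow is installed; the tags checked and the
# "suspicious" rule are illustrative guesses, not an iNat feature.
from PIL import Image
from PIL.ExifTags import TAGS

def read_exif(path):
    """Return EXIF tags as a {name: value} dict, empty if none exist."""
    with Image.open(path) as img:
        raw = img.getexif()
        return {TAGS.get(tag_id, tag_id): val for tag_id, val in raw.items()}

def missing_camera_metadata(path):
    """Flag images carrying no basic camera tags at all. Generated or
    heavily re-saved images often lack these -- but so do screenshots,
    scans, and privacy-stripped uploads, so treat this as a hint only."""
    exif = read_exif(path)
    return not any(tag in exif for tag in ("Make", "Model", "DateTime"))
```

Of course, EXIF is trivially editable, so at best this raises a flag for human review rather than proving anything.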

1 Like

right now, you can upload any image you want and specify whatever location and date/time you like. how do people judge bad locations & dates? an image is only one part of a complete observation.

right now, you can do a lot of manual editing of photos if you like. how do you judge if an image has been edited or not?

there are some things that text-to-image AIs like DALL-E, Imagen, etc. will revolutionize, for both good and bad. but i think the pressing questions are less about whether folks will submit fake iNat observations and more about who controls the AIs and how they can monetize their products. for example, suppose you ask Imagen to generate an image of a cardinal drinking from a teacup, and Imagen bases its cardinal image on 2 or 3 cardinal photos from a particular image repository that it used as its training set. does Google have to pay royalties to the original photographers of that handful of images? i don’t think current laws do a great job of specifying how licensing works when an AI is in the middle of that chain of distribution.

13 Likes

At least entomologists can note that the spot pattern and thorax pattern on the AI image don’t match any real species. This is another reason we can’t describe new species from photos going forward.

6 Likes

If this gets into standard image enhancement, like those super-saturated pics of purple flowers that look pink floating mid-air above blur, and starts bringing things into focus by inventing detail, like in that link, we may have trouble coming. If it’s tagged in the EXIF data, iNat could handle it appropriately; if it’s edited in a third-party app, the tags could get stripped.

1 Like

This is equivalent to faking experimental results. Many people post under their real name and resume; who would want to wreck their whole career for the sake of a more impressive picture?

4 Likes

I think one has to be careful here to recognize that not all pictures that look like photographs are documentary in nature. Many are stock illustrations created for a particular purpose, others are “fine art” sold to folks looking for a beautiful picture to hang on their wall at home. Digital artists and graphic designers may use multiple photographs and image manipulation plus digital textures etc. to match the vision they have in mind, similar to a painter painting from memory rather than a scene in front of them. I’ve had people ask me on Flickr whether they could use parts of my images for their digital collages, creating something that looks more like a dreamscape or fantasy world by blending bits and pieces of multiple photographs. That’s a valid form of artistic expression and I think it’s cool to see some of my images being incorporated into someone else’s artistic vision like this.

It’s just not something that should be posted to iNat, and I think the artists themselves are very aware of that. I don’t think any of them would claim their pictures to be “real” photographs. Many times when I see these types of composed or manipulated images posted to social media with some sort of sensationalist claim that they are real, the posting is done by someone else using them as click bait without the artist’s knowledge or consent, essentially committing copyright infringement on top of “fake” claims about the image. The artists themselves, at least the ones I’ve communicated with, are typically eager to point out that their image is digital art and not a “real” photo and they’re usually happy to talk at length about what software, textures, techniques etc. they used to create it.

3 Likes

Some of the high-end cameras are said to have AI embedded in the chip. That will be one round of image enhancing. I thought the bokeh effects in some macros were rather strange. After uploading to the computer, you can do another round of enhancement. Finally, after uploading to iNat, the system may cut down the picture size. I’m skilled in grafting pictures manually using GIMP; it’s probably not too difficult to doctor some images.

I think, in this case, we are assuming it will mostly be new users or those who don’t care about their online presence.

1 Like

That is super interesting; do you possibly have a link to an article on that or something?

Oh, I thought I read it in some advertisements. Something like this link; it says “Autofocus featuring AI technology” somewhere in the middle part.
https://www.panasonic.com/ca/consumer/cameras-camcorders/mirrorless-cameras/lumix-gh5m2-special-features/high-quality-fundamental-performance-that-meets-professional-needs.html

Guys I found it!

7 Likes

That’s the mini version, for user testing. I don’t believe it is as advanced, as you can’t give it an image to start with.

“Autofocus featuring AI” isn’t relevant to this topic. That AI only helps with focusing on the subject in real time, while shooting; it doesn’t alter the image in any meaningful way.

1 Like

It sounds like that is a focus-assist feature that lets the camera recognize whether there is a human or animal in the frame and focus on it automatically. A lot of modern digital cameras have features to emulate photography techniques that used to require additional hardware, e.g. tilt-shift or fish-eye lenses, lens filters, etc. Most are also capable of doing things that used to require photo-editing software (or, in the old days, darkroom techniques), e.g. punching up the saturation or replacing colors, HDR from multiple exposures, etc. It’s quite possible to use a modern digital camera as a tool to create artsy images straight out of camera. These features are there because people enjoy playing with them and like to have the in-camera shortcuts to these effects, but they are nothing new. Just the tools have changed, making them more accessible to even the layperson unfamiliar with the technical details of photography.

1 Like

I made some caterpillars to go along with your moths. Kinda fun actually. :-)


The images are fairly small and pixelated in places, which may give them away as computer-generated. This may just be a limitation of the mini version, though.
[image: CaterpillarExample]

1 Like

We had a fun one. Large white bud. Ice flower. Only blooms every thousand years.
Shock.
Awe :rofl: :joy:
But not to us; fynbos eyes see a Protea cynaroides bud. We saw them on our hike yesterday.

PS Another good thing about iNat is the range of images: different photo skills, different angles. M’kay, that one is photoshopped and the colour has been enhanced, but these are clearly genuine.

3 Likes

I actually have access to DALL-E 2! Though I use it for art inspiration, personally: rather than using the generated images directly, I use them to come up with ideas/concepts for my own work. (I do also post some generated images on Instagram, but I am always clear that they are from DALL-E 2.)

One clear indicator of whether something was generated by DALL-E 2 is the dimensions of the image, since it is limited to being a square. However, someone could easily crop the image if they so chose, I’m sure. There is also a watermark in the bottom-right corner of the image, but, again, I’m sure someone could edit that out if they wanted.
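
For what it’s worth, both of those tells are easy to check mechanically. Here’s a rough sketch in Python with Pillow; the strip size and colour-count threshold are just my guesses, not documented constants, and as noted, cropping or retouching defeats both checks:

```python
# Heuristic check for the two DALL-E 2 tells described above: square
# dimensions and the strip of flat colour squares in the bottom-right
# corner. A hint, not proof -- cropping or retouching defeats both.
from PIL import Image

def dalle2_hints(path, strip=(80, 16), max_colors=40):
    """Return True if the image is square AND its bottom-right corner
    compresses to very few distinct colours (flat watermark squares).
    strip and max_colors are guessed values, not documented constants."""
    with Image.open(path) as img:
        is_square = img.width == img.height
        w, h = strip
        corner = img.convert("RGB").crop(
            (img.width - w, img.height - h, img.width, img.height)
        )
        # An ordinary photo corner is noisy (hundreds of distinct pixel
        # values); a block of solid colour swatches is not.
        flat_corner = len(set(corner.getdata())) <= max_colors
    return is_square and flat_corner
```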

In the Terms of Use for the program, there are some rules that can help keep people from using it in dishonest ways. You can find the Terms of Use here: https://labs.openai.com/policies/terms. If someone claims a generated image as their own, they could get into legal trouble. Of course, there are unsavory people who could try to get around that or straight-up lie, but hopefully most people will be honest about their usage of the program, especially since I believe the images stay in OpenAI’s database, so surely they could find the image to prove it came from DALL-E 2 if a legal case arose.

And as mentioned before in the thread, there are experts, both in recognizing genuine photos and in identifying taxonomy and such, who could help detect dishonest images.

For fun, here are some photos I generated with DALL-E 2 using the prompt “A photo of a new purple butterfly species, photographic, high resolution”:





6 Likes

Can you do one for true flies? :slightly_smiling_face: (I don’t know my butterflies, so no idea how easy it is to rule these out, beyond the ones with clear artefacts.)

I’ve been on the waitlist a while; it’s super frustrating not to have access yet!
I got access to Codex and GPT-3 pretty quickly, so I’m kinda confused why…

I could be wrong, but in the same way iNat doesn’t take in a node without a minimum of 100 images, I can’t imagine an image-generation model could infer much from 2-3 images… I mean, I guess it might just recreate something almost identical to those 2-3 images, but I’d imagine the model simply disregards elements with insufficient training data, in part for this very reason.

In the example you mention, you would also have to split the royalties across a whole section of the training data… “cardinal” is just one concept, but there would also be images of “drinking” and images of “teacup”, etc. You’d likely be looking at thousands and thousands of inputs, with no way in the majority of cases to discern what portion came from which image.

That said… I have seen with GPT-3 that if you feed it a prompt from something very well known, like music lyrics or a biblical passage, it will sometimes just complete the sentence with the original text. There are similar examples and discussions for Copilot too, I see:
https://www.theverge.com/2021/7/7/22561180/github-copilot-legal-copyright-fair-use-public-code
Doubtless if you start generating images of something like Mickey Mouse to use commercially, you will be treading on thin ice.

I think once these tools become readily available to the general public, it’s highly likely they will be used by naive users just as the example you share presents… even if it changes the number of spots on a ladybird! This sort of thing is already visible to some extent in the way some people use the really ugly image-sharpening tools that destroy any detail in the photo, despite the obliteration of the original pattern. New users with a limited idea of how crucial minor detail can be in determining a species just won’t be aware of the dangers of using these sorts of tools.

1 Like