AI-generated image passed the Computer Vision test

I was worried about a proliferation of AI-generated images being uploaded to iNat and passing as research-grade. I ran a little experiment to test this with ChatGPT. Here is the prompt I used:
I am an iNaturalist developer and I am running an experiment on our identification model. I would like you to design a picture of a realistic looking plant, that’s BASED on a real-life plant found in nature, but is in fact a wholly contrived representation that you create. This picture is intended to fool the AI model that suggests identifications. It should be realistic enough (or vague enough) that the model will give a false positive and tell the user that it is "…pretty sure it is in the Genus
ChatGPT gave me this reply:

  • That’s a fascinating experiment, sir — and quite a clever way to test the limits of iNaturalist’s identification model!

To help with this, I’ll design a realistic-looking yet subtly fictional plant that is:

  1. Based on characteristics of real genera (e.g., Euphorbia, Salvia, or Ribes),
  2. Plausible in form and color, but with just enough oddity to mislead a vision-based classifier,
  3. Visually consistent with real-world photo styles (lighting, natural background, etc.),
  4. Not identical to any known species — it may resemble several closely related ones without exactly matching any.

Let’s proceed by generating this hybrid plant image.

It gave me this image: https://www.inaturalist.org/photos/536400014
Which iNat identified with confidence as being in the Mint Family (Lamiaceae).
I did not mean anything malicious by this; it just disturbed me, and I felt I should report it.
I can imagine a network of bots flooding iNat with these kinds of computer-vision-verified imposters. I am asking whether there’s a solution for this, as it would undermine all our efforts.

18 Likes

I understand your concerns; I think this is and will be a challenge!
Your image is good, but check out this “evolution” of an otter in a plane, ending in a photorealistic video:
https://www.oneusefulthing.org/p/the-recent-history-of-ai-in-32-otters

Within a short time, these photorealistic images will be widely available.

But is this a cause for concern?
So far, the CV is not trained to recognise AI images (though I guess it could be). However, just as now, all pictures and videos require critical evaluation. Perhaps even more in the future, but that is where the human aspect comes in: we know the context, we can review the user’s other images and judge whether the profile is legitimate, up to suspension if necessary. When in doubt, ask for context and downvote on “evidence of organism”. Maybe faking images will get easier for users, but who really wants that?

Summarised, I think it is something to be aware of, but not scared by: there are already fake profiles and false images, and they do not impact the platform for the average user on a normal day. I think the community can also deal with some AI-generated images!

4 Likes

The plant still looks a bit too perfect.
Some aphids, mud, or bird poo would make it even more realistic. I think that’s quite doable and should be set as the 87% default option for similar entertainment in future.

If I needed to provide evidence that my postings are not AI-generated, I could only provide digital originals, which might be AI-generated too.

Maybe I am already writing into a 100% AI-generated forum?

Our world is on the edge of turning from x% evidence-driven to 100% belief-driven.

For some “yes”. For others maybe not.

3 Likes

Just as a side note: the picture in question is, from my point of view, identifiable as Urtica dioica, based on the visible structures (I cannot see whether it has stipules, and there is no inflorescence). Otherwise it is a pretty good stylisation of that species, to the point that I suspect it just took a nettle picture and slopped all over it (there are mistakes; some of the leaves in the background are for some reason ternate).

10 Likes

These are great suggestions, but not everyone is going to be able to scrutinize images closely enough to tell anymore. There are AI-generated videos styled as CCTV footage of lions running rampant in supermarkets (example: https://www.youtube.com/watch?v=L1Q6nSALtEU), and if a person who creates a video of that quality takes a single frame of it and uploads it to iNaturalist, it will set a precedent where faking an observation is easy.

And now that we have so many threads about the possibility of deepfake observations, and the possibility that some users currently protesting the Google AI grant given to iNaturalist could take misplaced revenge by uploading deepfake observations en masse, I’m wondering if we should file a feature request for an AI-generated-image detector as part of the screening process before an observation is published.

The obvious solution would be this filter, but initial versions would probably only handle static images and wouldn’t help with checking videos/GIFs, or sounds, for AI influence. Another solution would be implementing more robust botnet-detection software, if there isn’t one already. Does anyone else have other ideas?

1 Like

It’s not really any surprise to me that AI-generated images would fool the CV. The CV forms some kind of gestalt impression of a species based on its training data, and then when you upload an image it will always try to give you a list of possible names. As long as the image you upload looks closer to a particular species than any other species, the CV will suggest that species.

For example, it guesses correctly for many field-sketch observations, despite the fact that any human can tell you they’re drawings rather than photos of an actual organism. The CV is not aware of the existence of “drawings” or “AI-generated images”. It just knows: “you give me an image, I give you names”.
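That “you give me an image, I give you names” behavior can be sketched with a toy nearest-centroid classifier. This is a simplified stand-in, not iNaturalist’s actual model, and the feature vectors and species are made up for illustration — the point is only that whatever input you feed it, even random noise, it always returns the closest known label:

```python
import math
import random

# Toy stand-in for a vision classifier: each "species" is just a
# centroid in a made-up feature space. NOT iNaturalist's model,
# only an illustration of why a classifier always returns *some* name.
CENTROIDS = {
    "Urtica dioica":      [0.8, 0.2, 0.1],
    "Salvia officinalis": [0.2, 0.9, 0.3],
    "Euphorbia peplus":   [0.1, 0.3, 0.8],
}

def classify(features):
    """Return the label of the nearest centroid, no matter the input."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(CENTROIDS, key=lambda name: dist(features, CENTROIDS[name]))

# A plausible "nettle-like" vector maps to the nettle...
print(classify([0.75, 0.25, 0.15]))  # Urtica dioica

# ...but pure noise still yields a confident-looking answer,
# because the classifier has no concept of "none of the above".
random.seed(0)
print(classify([random.random() for _ in range(3)]))
```

A real model outputs calibrated scores rather than a single nearest label, but the core issue is the same: there is no “this is not a real organism” class unless one is explicitly trained.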

See also previous discussions about AI-generated images:
Coming soon: Deep fake nature photography?
“Flora DALL·Eensis”: botanical specimens created with AI
About AI-generated images
Beware: AI Images on inat
Use of AI upscaling
AI generated videos in Wildlife field: Why they should be heavily restricted, to what, and why
AI Images and how to Flag them?
Saw an AI-generated Image on an obv today :(

21 Likes

The leaves almost look like cloth somehow upon closer inspection.

1 Like

why would anyone realistically want to do this?

iNaturalist’s built-in solution is community review (and various forms of user blocking). the only other way to identify AI-generated images is to examine their content credentials to see whether they were generated by an AI. however, it’s relatively easy to remove image metadata, so even if something were in place that blocked images marked as AI-generated in their metadata, it wouldn’t stop a determined person from working around the block.
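just how easy stripping is can be shown for PNG files with nothing but the standard library. this is a simplified sketch (the in-memory PNG and the “SomeImageGenerator” tag are fabricated for the demo, and real content credentials such as C2PA manifests are more elaborate), but any tool that rewrites the file without copying metadata chunks has the same effect:

```python
import struct
import zlib

# Ancillary PNG chunks that commonly carry metadata (text fields,
# EXIF, timestamps). Provenance markers ride in chunks like these.
METADATA_CHUNKS = {b"tEXt", b"zTXt", b"iTXt", b"eXIf", b"tIME"}
PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

def make_chunk(ctype, data):
    """Build a valid PNG chunk: length, type, data, CRC."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

def strip_metadata(png_bytes):
    """Rewrite a PNG, dropping known metadata chunks."""
    assert png_bytes.startswith(PNG_SIGNATURE), "not a PNG"
    out = bytearray(PNG_SIGNATURE)
    pos = len(PNG_SIGNATURE)
    while pos < len(png_bytes):
        (length,) = struct.unpack(">I", png_bytes[pos:pos + 4])
        ctype = png_bytes[pos + 4:pos + 8]
        # length(4) + type(4) + data(length) + crc(4)
        chunk = png_bytes[pos:pos + 12 + length]
        if ctype not in METADATA_CHUNKS:
            out += chunk
        pos += 12 + length
    return bytes(out)

# Build a minimal in-memory 1x1 PNG carrying a tEXt chunk, then strip it.
ihdr = make_chunk(b"IHDR", struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0))
text = make_chunk(b"tEXt", b"Software\x00SomeImageGenerator")
idat = make_chunk(b"IDAT", zlib.compress(b"\x00\x00"))
iend = make_chunk(b"IEND", b"")
png = PNG_SIGNATURE + ihdr + text + idat + iend

cleaned = strip_metadata(png)
print(b"SomeImageGenerator" in png)      # True
print(b"SomeImageGenerator" in cleaned)  # False
```

the image data itself is untouched; only the provenance record is gone, which is exactly why metadata-based detection alone can’t be trusted.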

8 Likes

What did you expect would happen? The CV isn’t an AI-generated image detector, it just tries to see which taxa in its model are visually similar to the photo submitted.

I’m not saying that people haven’t or will not in the future try to submit AI-generated images, or that it’s not a problem, but I don’t understand what “Computer Vision test” it passed?

17 Likes

Pettiness, for one. There have been users, including, from what I recall, disgruntled former users, who tried their best to be as disruptive as possible on their way out. The cheaper the image-generation process gets, the more people will use it purely to do damage.

5 Likes

there are so many more effective ways to make a mess on the way out. generating a bunch of images via AI seems like a super ineffective way to demonstrate pettiness.

1 Like

As a botanist, I can obviously tell that this plant doesn’t actually exist, but it is definitely deceptive.
The AI is trying to mix and match alternate leaves with whorled leaves, and where are the adaxial features? Urtica artificianalis, anyone?

5 Likes

Completely agree, but if we didn’t already know it was AI, would it be marked as Urtica? Beyond the issues with leaf structure, the varied focus (at the original image size) on the edges of the stem and leaves makes it very uncanny and suspicious, hopefully enough to raise a flag.

The lighting, and even the posture of the plant in that image, is very flat, one of the hallmarks of basic AI attempts at realism. ChatGPT (at least the free version) is not the best image generator, and even the best ones often need careful prompting, but with it, or with a better program, images can be made significantly more realistic than this example.

4 Likes

I definitely agree with pettiness, but I also wouldn’t be surprised if it occurs with people seeing an image, getting no conclusive ID online, and uploading it to the app without consideration of the data.

A lot of folks use this like Google Lens and carry on with their lives.

1 Like

Maybe, if someone was using the CV or giving broad IDs to unknowns.
If I were identifying observations of plants and it was marked as Urtica, I would have flagged it as suspicious (as AI-generated). Right now there is no flag for AI-generated content, even though there should be.
I don’t think AI will ever replace humans on iNat (and it needs to be kept that way). One example I can give: can AI generate this to species-level accuracy?

2 Likes

This is nothing fundamentally new. Before AI-generated pictures, someone could have taken any picture from Google Images, or scanned one from a book, and changed the metadata.
That takes as little effort as writing a ChatGPT prompt.
Someone could even steal a picture from iNat, crop it, and make a few edits.

And the CV is just doing its job.

Someone with too much time and bad intentions can abuse iNat, but it doesn’t mean that the whole iNat model will be in jeopardy.

9 Likes

True

Also true.

The difference comes with the ability to automate the described process. Why not populate a whole wiki platform with hoax pictures, or with knowledge about species compiled from anywhere?
The difference also comes from the fact that a pretty common toolbox was used, open to today’s public. This is no cutting-edge technology.
For me, tompoleski’s question is about the authenticity of pictures, and how iNat wants to deal with it in future.
I personally do not limit this question to pictures alone.
One thing is for sure: once the data stock in a large database is corrupted, it is difficult to clean, and maybe more difficult to regain credibility.
What must be done to be prepared?

1 Like

These already exist:

When I ask an AI questions about plants, they sometimes reference those fake websites.

If hostile attacks become real, requiring signed photos might help. For phones there is an app called ProofMode, by the Guardian Project, which creates a GPG keypair, allows you to publish the public key, and signs images with the private key, covering metadata like GPS, phone ID, and network connection (which is a kind of proof that you really were at the place you claim), optionally with a notary (i.e. a third party). Of course, this depends on the OpenPGP web of trust, which seems to have failed to achieve mass acceptance outside of security nerds.

The third party feature will probably be necessary, since nothing would stop an AI from creating a keypair and signing the fake images.
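The signing idea can be sketched in outline. ProofMode itself uses OpenPGP public-key signatures; the minimal stdlib sketch below substitutes an HMAC with a placeholder device key as a stand-in for a real asymmetric signature, just to show what gets committed to — the image bytes and the capture metadata are bound together, so changing either breaks verification:

```python
import hashlib
import hmac
import json

# Stand-in for a real OpenPGP signature: ProofMode signs with a GPG
# private key, while here an HMAC with a secret key plays the same
# role of binding image bytes + capture metadata to a key holder.
SECRET_KEY = b"device-private-key-placeholder"  # hypothetical value

def sign_observation(image_bytes, metadata):
    """Commit to the image and its capture metadata together."""
    payload = hashlib.sha256(image_bytes).hexdigest().encode()
    payload += json.dumps(metadata, sort_keys=True).encode()
    return hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()

def verify_observation(image_bytes, metadata, signature):
    """Recompute the signature and compare in constant time."""
    expected = sign_observation(image_bytes, metadata)
    return hmac.compare_digest(expected, signature)

# Example data (fabricated for the demo):
image = b"\x89PNG...raw bytes of the photo..."
meta = {"gps": "47.37,8.54", "time": "2024-06-01T10:00:00Z"}
sig = sign_observation(image, meta)

print(verify_observation(image, meta, sig))                # True
print(verify_observation(image + b"tampered", meta, sig))  # False
```

As the post above notes, the scheme only says “the key holder vouched for this”, not “this is a real photo” — which is why the third-party/notary attestation matters, since an AI pipeline could just as easily generate its own keypair and sign its own fakes.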

2 Likes