Beware: AI Images on iNat

How many people seriously know (or care about) the difference between fantasy and reality anyway? We live in a time when any science that points to a conclusion someone doesn’t like is called “flawed science.” A time when someone who is fractally wrong will carefully explain why you are the one who is fractally wrong. In such a world, are we just wasting our time even discussing mitigation?

1 Like

I would think that anyone who would go to the trouble of submitting an AI-generated image to iNat won’t stick to just one, especially if it works the first time. If you suspect one of their records is AI, look carefully at their other records. If they do what I suspect they’ll do, then you have a legitimate case to present to iNat staff.

4 Likes

Don’t forget the holy ghost that we can see making up the complete trinity right here, right now (: Perhaps best illuminated by https://xkcd.com/386/

“Look, I can put razorblades in a (poorly re-formed) chocolate bar” isn’t proof there are other people actually putting razorblades in chocolate bars.

There’s a kind of beautiful meta-ness to the way that the argument for getting out our pitchforks has become a mirror to the problem it’s trying to stoke fear about.

It’s stopped being a search for clarity - an attempt to identify the true nature of the problem, and whether there even is one of any practical significance - and has become a Generative AI process of its own, where people imagine problem scenarios from whole cloth without pointing to any actual evidence that the things they imagined are actually happening.

So maybe let’s all take a deep breath, and wait to see whether any actual observations contain images that are more plausibly faked than the one that reopened this discussion.

6 Likes

Except that your image is an obvious fake. The drop shadow doesn’t match the background.

2 Likes

It also seems to me that the discussion has been heading in the direction of imagining hypothetical harm (and demonstrating how to accomplish that). But we should probably acknowledge that this line of thinking is one we humans use a lot, even when it’s not very productive.

The upside of such speculation, when tethered to a realistic assessment of people’s motivations, is that it can allow us to identify likely problems and potential solutions ahead of time.

Looking at this from a cybersecurity perspective, some important factors are the attractiveness of the “exploit”, as well as the cost and speed of detection and response. Fortunately, I don’t think anyone has yet identified a factor that would make adding large quantities of fake images very attractive. About 200,000 observations are added to iNat each day; while people could add AI-generated images, and maybe already have, so far it’s only happening in very small numbers.

However, if AI image generation did become a major issue, iNat might have trouble responding quickly because code changes depend on a very small group of very busy staff. I would be interested to see a broader effort to add some automated image-checking tools that could catch copyright infringement or warn users when they’re re-uploading the same image. Having checks like that in place would provide a framework for adding AI detection if that ever became necessary.
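For the re-upload case specifically, the core check is technically cheap - perceptual hashing catches resized or recompressed copies of the same photo. Here’s a minimal sketch, assuming the third-party imagehash and Pillow Python libraries; the function, the in-memory index, and the threshold are invented for illustration, not anything iNat actually runs:

```python
# Illustrative duplicate-image check using perceptual hashing (pHash).
# Not iNat's actual pipeline - just a sketch of the idea.

import imagehash
from PIL import Image

# Hypothetical in-memory index of hashes for previously seen images.
known_hashes: dict[str, imagehash.ImageHash] = {}

def check_upload(observation_id: str, image_path: str, max_distance: int = 5):
    """Flag an upload if it's perceptually close to an already-seen image.

    max_distance is a Hamming-distance threshold on the 64-bit pHash:
    0 is a near-exact match; small values still catch resized or
    recompressed copies of the same photo.
    """
    new_hash = imagehash.phash(Image.open(image_path))
    for seen_id, seen_hash in known_hashes.items():
        if new_hash - seen_hash <= max_distance:  # Hamming distance
            return f"possible re-upload of observation {seen_id}"
    known_hashes[observation_id] = new_hash
    return None
```

Copyright matching and AI detection are much harder problems, but a hook like this in the upload path is the natural place they’d plug in.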

10 Likes

I can envision someone working in the field of AI-generated imagery using iNat as a testing ground to see how convincing they can make their images of organisms (a challenge, given the complexity of most). If they can fool the detail-oriented nature experts (iNat reviewers), that would be quite an accomplishment for them.

4 Likes

The beetle is floating! Of course, I don’t really look that closely at every beetle photo. Maybe I should?

1 Like

It couldn’t hurt :)

2 Likes

Good point - but I wasn’t trying to say fake images were common, just that they take hardly any effort and there’s very little stopping anyone from uploading them. I wouldn’t mind the conversation being redirected to real examples and solutions, though.

It’s easier to recognize a fake when it’s stated up front that it’s a fake. In reality, AI and other faked images on iNat would be mixed in among many real ones, and much of the time there’s little reason to look closely at any one image.


Let’s play a related game. To somewhat replicate the diversity of possible photos, I have 3 images of a bee. One is a real image I’ve taken, one is an AI image, and one is a photoshopped image.

#1. Bumblebee on a clover:

#2. Bumblebee on Chicory:

#3. Bumblebee on a cluster of small flowers:

Can you tell which is which? If so, how did you go about it? It might reveal some better ways to spot an AI image - or the shortcomings of some of those ways. I’ll reveal the answers soon.

1 Like

#1 is from an AI (much too inconsistent for Photoshop, unless done on purpose).
#2 is photoshopped (consider the leg ends, compare with this photo).
#3 is the real one.

8 Likes

I will say, iPhones (and cameras in general) can take some… “interesting” photos


These are some images of a spider I took last year, completely unedited, but somehow the background is completely blurry while the spider is not.

3 Likes

I don’t want to discourage the kind of proactive thinking that carefully explores known problems and the potential for future ones - but I do think we need to keep the extent of the problem in perspective, and not let some kind of “robots will kill us all!” hysteria create the opposite problem: driving away honest users who don’t like being accused of using witchcraft for malevolent gains when all they wanted to do was share a hastily taken photo of an interesting bug, in the hope someone might be able to tell them more about it.

The number of “suspected AI images” causing problems is dwarfed by the number of genuinely terrible photos that some people upload frequently anyway, prompting their friends to promote them to RG even when the subject is too blurry or small to identify properly without local knowledge of what looks vaguely similar and is incredibly common.

So what problem are we actually trying to solve here? If it’s keeping “bad data” out, then “stopping AI” seems like premature optimisation for a very minor corner case. If we’re looking at it through a cybersecurity lens, what is the actual Threat Model? How many cases are slipping past the existing human review process which better automated checking might reasonably catch?

That’s an interesting line of thought - but any group operating out of a university would get railroaded by their ethics committee if they tried to do this without informed consent, let alone something that potentially disruptive to a multinational citizen-science project. And any private group is going to have shareholders who will be just as concerned about the reputational damage it would do to them, let alone the possibility of it triggering the enactment of controlling legislation.

Even without all those downsides, though, this is unlikely to bring them any significant benefit. They have no way of “ranking” who they fooled on a site where lots of people are known to misidentify things, and no way of feeding that back into training their algorithm in the sort of quantities needed to actually ‘learn’. Anyone truly wanting to do something like this is going to get better results from the kinds of things they are already doing, like holding “art competitions” where the judges are blind as to who or what created the submission - and where the ‘expertness’ of the judges isn’t a wildcard that changes drastically for every candidate image they submit.

And there’s absolutely nothing stopping me from taking a perfectly real photo of a kangaroo and claiming I saw it in Antarctica - if Hardly Any Effort is the measure of how likely I am to do that.

Or if I don’t want to be instantly exposed and ridiculed, I could be more subtle, like gently extending the known range of a species and claiming Global Warming Did It. Or surrounding a mine or forest at risk of logging with multiple obs of endangered and protected species.

Unless I was doing this for a school project - because then I’d run the very real risk of the academic and permanent record consequences that falsifying work has these days…

At this stage it’s not even clear that having very little to stop this sort of thing isn’t a feature. If doing it is a trivially dumb lie, then there’s no achievement of note to be had by doing it. But if you try to make it Hard - there’s a whole breed of people out there who are going to smile and say Challenge Accepted.

Just look at how hard you are still trying to prove you can fool the people who weren’t fooled by the original image we were asked to look at :D

3 Likes

This is ridiculous. I swear AI is just about to ruin everything.

2 Likes

1 - looks a bit off. Why is its eye so long? AI.
2 - definitely real. Maybe a little photoshopped, not sure.
3 - something’s going on with the legs, I think. Not sure.

I’m thinking #1 is AI

1 Like

#1 is AI, I don’t know about the others.

2 Likes

I’m sure #1 is AI-generated - the anatomy of that bee is way off, wings in odd places at odd angles, etc., which is typical for AI. I’m less certain which of the other two is the photoshopped image.

This can happen if the camera moved in the same direction the spider was moving. It’s a technique often used in sports photography, or for flying birds and other fast-moving subjects. You can find more info by searching online for “panning” + “photography”.

3 Likes

1 is a bee stuck on a faked flower
2 looks genuine - the legs don’t ‘fizzle out in midair’
3 is maybe ‘helped’ a little?

3 Likes

Here is an example photo that I took


https://www.inaturalist.org/observations/21755029

2 Likes

Seems to only work on arachnids though :)

2 Likes

Clearly the in-camera AI was taught to be afraid of them by its parents and is target fixating in fear …

3 Likes