Beware: AI Images on iNat

I wanted to make this PSA because apparently, AI images are making their way onto iNat in larger numbers, and in some cases making it to Research Grade (RG).

If you see a photo that looks off, it may well be. Take a second look and be careful.

Link to source:


I don’t know if it is AI, but it is copyright infringement:

OK, it is AI, according to the description on this other stock image site:


The easiest tells are: 1) the same red flags as for copyright violation in general, e.g. “too good to be true” photos from a new account, or watermarks; and 2) things that are “off”: too many fingers, teeth at the wrong angles, patterns that are too regular or not regular enough…


Oh dear, I have always wondered about this in the back of my mind.

Yes, fortunately AI images are (at the moment) often recognizable as not being real photos. What worries me is what happens when developers make even better AI, and what the future of iNat will look like then.


Many AI generators create square images, while cameras and phones more often take rectangular photos. Worth looking out for.
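
A toy sketch of that heuristic (my own illustration, not from the thread; the ratio list is just an assumption, and generators can be asked for any aspect ratio, so treat a square image as a weak hint at best):

```python
from math import gcd

# Hypothetical list of common camera/phone aspect ratios
# (an assumption for illustration only).
COMMON_RATIOS = {(3, 2), (4, 3), (16, 9)}

def is_square(width: int, height: int) -> bool:
    """Flag exactly-square dimensions, a common generator default."""
    return width == height

def matches_common_ratio(width: int, height: int) -> bool:
    """True if the reduced ratio matches a common camera format."""
    g = gcd(width, height)
    ratio = (width // g, height // g)
    return ratio in COMMON_RATIOS or ratio[::-1] in COMMON_RATIOS
```

Dimensions here are plain ints; in practice you would read them from the file (e.g. with an image library), and a square result would only justify a closer look, never an automatic flag.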


It might be worth adding some reminders in the sign-up / upload pages that AI images are not allowed because they can corrupt the training data set. I’m worried about this.

It’s a bit interesting to me, though, that we generally would allow an illustration that represented somebody’s best reconstruction of what they saw in the field. I wouldn’t begrudge somebody using AI tools to make such an illustration, if they were transparent about it.

Perhaps we need an “illustration/reconstruction” data quality flag?


You can ask Midjourney to give you images in any aspect ratio you want.


Oh no!


Here’s one I saw, had the experts scratching their heads.

The overall shape is very wonky and the angle doesn’t make much sense.


Assuming that is AI-generated, I’d definitely get fooled by it, mark it as Harmonia axyridis, and go about my day IDing lol


if we are talking ethics, virtually all AI training is done on stolen information*. iNat is an exception, to my knowledge, but chatbots like GPT and art bots like Midjourney take people’s content without their permission. they use that data for training purposes, or in rarer cases repackage it for resale.
(* or, if it’s eg Google, by using intentionally vague and deceptive terms + conditions on top of rampant theft)

their bots scrape anything and everything, and have crippled the ability of human artists and writers to make a living, between the plagiarism, sales of their content without their benefit or consent, and increased competition. plus of course the total ineffectiveness of copyright and intellectual property laws to defend any but the most monied of people and corporations.

when it comes to iNat’s data set, while curators and identifiers will be doing their best to keep order and fairness, it’s ultimately the responsibility of the people who import the data into their algorithms, or who use it for research, to double check the data.

eta: I’m not against algorithmic “intelligence” altogether, just when it’s used in an exploitative manner.


I crop the majority of my images, and as pointed out, generative AI can usually be told to use a specific aspect ratio.


Over time, we can expect the major generative AI systems to tag their images—this won’t be foolproof, but it should be useful. There are even ways that tags can persist across edits. The iNaturalist import routine will need to be updated once the generative AI systems take this important step.

(Microsoft and Adobe have been discussing a proposal, but I personally think it’s a wrong-headed approach.)
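
To make the tagging idea concrete: the simplest kind of tag is plain text metadata inside the image file (richer, signed provenance schemes like the Adobe/Microsoft-backed C2PA go well beyond this). A rough sketch, my own illustration only, of scanning a PNG’s tEXt chunks for such a tag:

```python
import struct
import zlib

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def text_chunks(data: bytes):
    """Yield (keyword, value) pairs from a PNG's tEXt chunks."""
    if not data.startswith(PNG_SIG):
        raise ValueError("not a PNG file")
    pos = len(PNG_SIG)
    while pos + 8 <= len(data):
        (length,) = struct.unpack(">I", data[pos:pos + 4])
        ctype = data[pos + 4:pos + 8]
        body = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt" and b"\x00" in body:
            key, _, val = body.partition(b"\x00")
            yield key.decode("latin-1"), val.decode("latin-1")
        pos += 12 + length  # 4 length + 4 type + data + 4 CRC

def looks_ai_tagged(data: bytes) -> bool:
    """Very crude heuristic: any text chunk mentioning "ai"."""
    return any("ai" in f"{k} {v}".lower() for k, v in text_chunks(data))
```

A real import check would need an agreed-upon, tamper-resistant format rather than a substring match; this only shows where the simplest sort of tag would sit in the file.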

There is also some potential to use AI to detect AI creations, although this likely will lead to a sort of AI Arms Race. I would not anticipate the use of such a mechanism in iNaturalist as being applied to every incoming image, but it might be useful for certain categories, such as rare species.

This is a growing challenge in multiple contexts and I can’t imagine we can avoid it. If there is ever a working group on this topic, I’d be available to participate. I’m retiring shortly as an analyst at Gartner on our Cybersecurity team.


This worries me in regards to all of the internet, really. We already couldn’t trust some things because they might be photoshopped or edited, but now we have to worry about fake video and audio made by artificial intelligence. I’ve heard plenty of songs made with the voices of people who never sang them, and some of them are scarily accurate. I’m sure it wouldn’t be difficult to create fake audio observations.


I was tagged to identify this beetle in Brazil once. The original file was already deleted by the author, but I saved a screenshot.


Computers could have made it possible for content creators to be fairly compensated … but the Xanadu system that could have enabled it was never built, and what we have today is worse than before computers. Ideally, everything a person types they would own and could charge for (micropayments), regardless of what work it is incorporated into. The only way most AI systems could have been fair would have been for people to earn micropayments whenever an AI scrapes their data. In general, AI systems (excluding iNaturalist) scare me, and I certainly don’t want AI-generated images on iNaturalist (unless it is an AI-generated illustration, marked as such, which the iNaturalist AI can exclude from its training).

Oh, jeez… I wished this time would never come.
I hope iNat can find some form of AI detection for its images, but AI is getting better rapidly. Even these filters won’t work forever.
To all of you calling out these images from AI, you’re doing great work. I can’t imagine it’s easy, and it’s scary to see a site this important get affected by improving AI. Best of luck to all of you.


ah yes, the elusive burtle, with four legs and oddly placed antennae.


same lol


You have to be careful about trusting your instinct that a photo’s “somehow a bit off”. As human beings, once we get a hunch, we tend to subconsciously look for ways to justify it. I once saw a study in which they showed people a mix of genuine photos and Photoshopped ones, and asked them to decide which were which. Most subjects were highly convinced of their assessments, giving detailed explanations about features in the image. But in fact the people were no better than random at distinguishing real from fake.

Case in point: just the other day I saw on Facebook a video clip of a plane flying low over a built-up area. I live near Heathrow so this is a common sight for me and I thought little of this rather mundane recording. But evidently for a lot of people, seeing a commercial plane flying so low over residential houses was surprising to the point of ridiculous, and the comments were full of hundreds of armchair detectives calling “fake!”, each with his or her own explanation of why the shadows were wrong, or the trees weren’t moving enough, or the plane was flying too slow, or too fast, or the foreground was too in focus, or too out of focus, or any number of other often-contradictory justifications.

You could pick almost any photo on iNat and, if you stare at it for long enough, you’ll spot details that don’t seem quite to make sense (either biologically or photographically); and, if you’re not careful, eventually your doubts will cloud out reason. It’s just human. For now, some AI-generated images are comparatively easy to discern, but that’s going to be less and less true over time. Unfortunately relying on instinct to pick them out is not likely to be as effective a method as many might imagine.