Beware: AI Images on iNat

Nah, I don’t think it’s that.

Not the world’s best argument.

This doesn’t quite explain how the shadows are in front of the organism. Judging by the aspect ratio and quality, it looks like an iPhone, where the flash and camera are right next to each other. From this distance, no shadows should be visible at all. See these example photos I just took:

Fair. I’ll mention this. The point of focus appears to be above the ladybug, more on the black area on the left beside the glass. If that is the focus point, the ladybug should be blurrier than it appears in the photo.

I’m going to try finding the source image.

I don’t see what these photos have to do with the original. They are completely different circumstances.

You have to account for reflection from the glass also.

5 Likes

Good point. Personally, I couldn’t replicate it exactly, but maybe that’d explain why it looks odd.

The glass is perpendicular to the surface, which would make the shadow much, much longer than in the ladybug image though.

I’m still convinced someone just used the airbrush tool in Photoshop (or some other software) and painted a “convincing”-looking shadow.

(I hope this discussion is still on-topic as it is still about spotting false images. If not on-topic, it may need to be moved)

1 Like

Hey all, let’s keep this topic to the broader issue of AI-generated images and not a deep dive into one image or another. I’m putting this on slow mode for the next day.

2 Likes

I was enjoying this crime scene investigation. ;-)

14 Likes

I think it is a good example of how we can be fooled…in both directions. If we are to be on the lookout for these things, we need some practice on what to look for.

4 Likes

They are only going to get better - the grail quest of producing things indistinguishable from what a human artist (using whatever tools) might make is a strong competitive driver in this field. That they can already fool as many people as they do is why “AI” came back into fashion as a marketing buzzword after being the Socks With Sandals of CS for several decades around the turn of the millennium.

The beautiful irony is, we are going to get to the point where only a ~~White Witch~~ another deep learning algorithm is going to be able to reliably spot the Black Hat abuses of this magic technology \o/

So in the meantime we need to be careful of not burning too many innocents at the stake - and maybe should focus more on whether a case like this one is a significantly ‘harmful’ observation (eg. the subject shows ‘abnormal’ characteristics for the supposed species, or is an outlier for where/when it was supposedly seen etc.) and less on erasing them utterly from the face of the earth as our own knight’s quest.

They are the invasive introduced weeds of iNat. Finding effective methods of control requires weighing them against the additional damage that those actions might cause.

3 Likes

When assessing whether a photo is likely to have been manipulated in such a way that it no longer reflects what the observer saw, I think it is worthwhile thinking about what people using photo editing (whether using AI or photoshopping elements not in the scene) are typically trying to accomplish. Generally the desire is to create an image that looks nicer or is more impressive than what one was able to photograph oneself.

Just because tools exist that allow people to create fake images doesn’t mean that most images will be fake. The reward has to be felt to be worth the time and effort of creating the fake. Often this means that it will be something unusual or difficult to photograph.

It seems to me that it is unlikely to occur to a new user to use photoshop to create an image that is so typical of the sort of photographs taken by new users (e.g., not cropped around the focal organism; ordinary and not particularly aesthetic human-made setting instead of some natural scene; small and common organism, etc.).

6 Likes

We shouldn’t be “assessing” whether a photo has been “manipulated” in the first place. Every digital camera processes the raw data that falls onto its sensor into something that’s viewable. Every phone from the last few years at least has a neural processing unit that processes photos. Is noise reduction ok? Is sharpening ok? Is white balance correction ok? These are all things that “AI” does.

1 Like

I suspect a lot of folks would disagree if by manipulated you mean created. Noise reduction is a different league than creating an image of an organism that doesn’t exist in reality at the stated location and date and passing it off as such. That’s why there’s a Data Quality Assessment checklist on every observation: “The Quality Grade summarizes the accuracy, precision, completeness, relevance, and appropriateness of an iNaturalist observation as biodiversity data.”

But yes, you’re correct if you only mean processed to improve clarity of the image. But that’s not what any of this is about.

3 Likes

Please note what I wrote. I did not write “manipulated”. I wrote “manipulated in such a way that it no longer reflects what the observer saw”.

(Or if you prefer, “manipulated in such a way that it no longer reflects a real encounter with an actual organism” – i.e. not noise reduction, sharpening, or color adjustment, but the addition of new details that were never there in the first place.)

4 Likes

I’d just note at this point that following this line of thinking is mostly just going to rehash a very recent discussion, so anyone with the urge to add to or rebut this might first want to read the conversation either side of this: https://forum.inaturalist.org/t/use-of-ai-upscaling/52724/61

But to quickly summarise:

A lot of the confusion comes from people saying “AI” when what they really mean is image processing. The ‘AI’ distinction is meaningless - the important point is whether the image processing used (whether manual or automated by any sort of algorithm) is an operation designed to clarify the details of an image without misrepresenting it, or an act of generative art where the original data is only used as a hint for creating something Entirely Original.

If I create an accurate image of something I saw, as best I can, it’s an Observation. If I create something I only imagined, it’s Art, even if it’s based on things I’ve previously seen.

How I created those images (draw, paint, photograph) and what tools I used for that are irrelevant - what they are images of is all that matters when deciding if they qualify as a genuine observation of an existing creature.

7 Likes

In my experience with non-iNat related media, I’d say this is much more common than making a cool, eye-catching image and trying to pass it off as real.

Think about the supposed Bigfoot or UFO sightings: they’re all distant and low quality. Sure, an up-close image of a beetle is cool, but it is also very easy to prove fake. A somewhat blurry image where the subject takes up a small amount of the frame in an iPhone photo? Much harder to prove false, and much more believable for people who are unsure of what to look for. Having the subject be small also hides any sub-par photoshop work.

I don’t think the comparison with photos of UFOs and Bigfoot holds up here. Occasionally some iNat user will post a blurry photo that they are convinced is something remarkable (say, a jaguar or wolf far from the plausible range), but this is not an intent to deceive – they really believe that is what they saw. These observations generally get ID’d by the community as something more plausible (a housecat or dog) or pushed back to a higher level, so they are also unlikely to be successful.

In the logic of iNat, there is little reward for posting unidentifiable photos – most people want their observations to be verified and attractive photos get more attention than inconspicuous ones.

In cases where users upload photos that are not their own, they typically seem to choose good photos, often of organisms that are unusual or difficult to photograph. Sometimes this seems to be motivated by having seen something and been unable to get a photo, so they upload some other stock photo to represent what they saw. In cases of obvious AI-generated images that have come up in previous discussions, they also typically seem to be images of this sort. The primary goal does not seem to be to create fakes that cannot be detected to be fakes, but rather a simple desire to claim either the photo or the experience of having seen a particular organism as their own.

I suppose it is possible that there are dozens of fake images of common species on iNat that have merely escaped detection, but I really doubt that many people would find it worth their time to create such fakes. Again there is little reward for doing so on iNat, if the organism is easy to observe and the record neither adds to one’s species count nor is notable in any other way.

The average new user doesn’t know what an observation is supposed to look like or what is realistic. (For some, their experience of nature may have been largely limited to wildlife photos in magazines and they may not think of windowsills as a place where beetles can be found.) A user who is only posting observations because they are required to for an assignment is unlikely to put much thought into how to avoid detection; they are going to choose the fastest, easiest way to fulfill the requirement (uploading someone else’s photo, instructing an online AI program to draw a beetle). Photoshopping an organism into a scene requires thought and effort (making sure the beetle is the right size relative to the window, etc.). Quite possibly it requires more time than finding something to photograph oneself. So if one is going to put effort into faking something, most people would want to produce something more impressive than a beetle on a windowsill.

Even in cases where someone might take pleasure in tricking other users, there is little fun in faking an image that didn’t need to be faked; it is only an accomplishment if you can get others to believe something unlikely.

7 Likes

Good points.

I will say there is a phenomenon (I forgot the name, I’ll try to find it at some point) where someone who is more experienced can often be more likely to cheat and/or fake their work, because they know exactly how to cheat in order to remain believable and not get caught. This is the reason why some respected members of other communities have been revealed to be lying or faking. I’m not saying this applies to the members of iNaturalist, but it is something to think about.

This same phenomenon may apply here to iNat users using AI, but to a much lesser extent. Someone who is not-so-new to iNat may know about the species leaderboards and edit their way to “observing” even the most common of species. If a user wants a specific species, using AI or Photoshop is significantly easier than going out and trying to find it for many people.

I’ve personally never seen a live variegated ladybeetle (the species in the other image shared somewhere above) despite actively searching for them for years. I tried editing a photo of a beetle I took onto a different background, and it took less than a minute. I took the image, pasted in the beetle, and added a drop shadow, all on my phone. Much less effort than actually searching.

The process wouldn’t be too different for stolen/AI images, and it would give someone another species for their iNat profile.

3 Likes

I think weighing the likely reward against the effort of creating a fake image is going to be helpful as we try to identify strategies to detect and exclude fake images (and thereby avoid eroding identifiers’ confidence that they’re reviewing real observations of nature).

It seems the motivation of users adding fake observations mostly falls into two groups: perceived Internet fame and getting a passing grade on a student project.

It’s certainly plausible that the fame-seekers have sufficient motivation to use an AI image generator or Photoshop to add fake observations of rare creatures, cryptozoology and the like. I’d guess the probability of this behavior would be correlated with the notability of the organism, and that might indicate the best way to combat it. We should approach every observation of an Ivory-billed Woodpecker or Spix’s Macaw with healthy skepticism; and conversely we don’t need to exhaustively analyze every new House Sparrow or Mallard.

One reason not to increase the gamification of iNat is to avoid motivating users to create fake observations for more ordinary species simply in order to increase their species count.

For students under duress to add observations, the first thing should be to try to educate the instructor that requiring students to add observations with little guidance is actively harmful. (iNat Educator’s Guide, useful forum thread). Once students are assigned work that involves adding observations, it can be a mistake to make it too challenging. Certainly, it’s worth encouraging students to focus on wild organisms, but so long as the boundaries are not too narrow, that can still be accomplished easily in most cities (birds, insects, weeds). The more challenging the project, the more likely a few will be motivated to create fake content.

I do think it would be worthwhile for iNat to include some automated image checks to provide guidance to identifiers. Adding the following three would be very useful:

  1. Compare checksum against existing iNat images. (Detects unintentional duplicates and also image theft and sharing.)
  2. Compare using image search engines (e.g. Google Images, TinEye). Given that these are commercial providers, this may have to be via generating a link to make the comparison process easier.
  3. Compare against an AI-checker algorithm. I’d envisage that this might generate a confidence score that an identifier might use to determine whether to investigate further.
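To make check (1) concrete, here is a minimal sketch of exact-duplicate detection by checksum. Everything here is hypothetical (the class and method names are my own invention, not iNat’s actual pipeline); it simply hashes the raw image bytes and remembers which observation first uploaded each hash.

```python
# Sketch of check (1): flag byte-identical duplicate uploads by comparing
# a SHA-256 checksum of the image data against previously seen checksums.
# The names here (DuplicateDetector, check) are hypothetical examples.
import hashlib
from typing import Dict, Optional


def image_checksum(data: bytes) -> str:
    """Return a SHA-256 hex digest of the raw image bytes."""
    return hashlib.sha256(data).hexdigest()


class DuplicateDetector:
    def __init__(self) -> None:
        # Maps checksum -> id of the first observation that used the image.
        self._seen: Dict[str, str] = {}

    def check(self, obs_id: str, data: bytes) -> Optional[str]:
        """Record this image; return the id of a prior upload with
        identical bytes, or None if the image is new."""
        digest = image_checksum(data)
        prior = self._seen.get(digest)
        if prior is None:
            self._seen[digest] = obs_id
        return prior
```

Note that an exact checksum only catches byte-identical copies; a re-encoded, resized, or cropped copy would need a perceptual hash (e.g. the approach used by libraries like ImageHash) rather than a cryptographic one.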
5 Likes