Which of these did you see in the example Arctic fox picture?
One thing that can be used to check the validity of a photo is to click the Info icon for that image and look at some of the EXIF data, particularly the camera model or other camera settings. Of course this may not work if the photo has been edited or the EXIF data has been altered, but my guess is that in most cases it should help.
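If you want to run the same check on a downloaded file rather than through the iNat interface, here is a minimal sketch in Python using the Pillow library (my choice of tool; exiftool or any other EXIF reader would do the same job). The file name is just a placeholder:

```python
# Minimal sketch: pull a few EXIF fields from an image with Pillow
# (pip install Pillow). "observation_photo.jpg" is a placeholder path.
from PIL import Image
from PIL.ExifTags import TAGS

WANTED = {"Make", "Model", "Software", "DateTimeOriginal",
          "ExposureTime", "FNumber", "ISOSpeedRatings"}

def camera_exif(path):
    """Return the EXIF tags most useful for a quick sanity check."""
    exif = Image.open(path).getexif()
    # Camera settings live in the Exif sub-IFD (tag 0x8769), not the base IFD.
    tags = dict(exif) | dict(exif.get_ifd(0x8769))
    return {TAGS.get(k, k): v for k, v in tags.items() if TAGS.get(k, k) in WANTED}

print(camera_exif("observation_photo.jpg"))
```

An empty result is not proof of anything on its own - plenty of legitimate uploads have their metadata stripped - but a missing camera model on a suspiciously perfect photo is one more clue.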
The attached example is just a random observation from my home screen this morning.
This morning I was looking at two observations with amazing photos that I had doubts about. No camera data in the 'i' details, the photos had watermarks that did not seem to match the account name (and none of their other observations had that watermark), and the alleged observation time was the middle of the night, although both were daylight photos. I did a search for other copies of those photos on the internet, but did not find them. I'm pretty certain they were not legit observations, but I couldn't find any hard evidence. I don't speak the language of the country they were from, so I didn't feel I could ask the observer anything about the photos. So I left them alone.
There's really subtle blurring around the eyes, whiskers, and inner ear fur that doesn't really make sense.
There's also the lighting - AI lighting tends to have a really indirect, dreamy feel to it. Look at where the light source is - or isn't. The fox's belly is the brightest thing in the picture, which would only make sense if the light source were extremely low. Which, it could be, but think of where the sun would have to be.
This type of picture might be attainable in a photography studio, but in the wild? In the snow? With a wild animal?
It's a stretch.
The midnight sun. It's the Arctic.
With a watermark - one that doesn't match the profile or their description - I would still flag it for copyright.
Right. Which is why the lighting definitely wasn't my first point. It makes sense if the sun was very low with no tall vegetation to block it and if the animal is standing on a hill. But there's still a level of diffusion of light here that makes it feel off.
Edit: the background is dark. This implies either a clear sky or vegetation. Vegetation would mess with the low-sun scenario; a clear sky would make this level of light diffusion unreasonable.
Though, to be fair, some of this is achievable by playing with levels in Lightroom.
The ears/whiskers/eyes still look really wonky
EDIT 2 because I keep noticing more things: Back to the light source
There's both a back/top light source and one from the bottom, and worse, the one from the bottom is bright despite the back right leg placement - the light source would basically have to be directly under the fox's belly for that to make sense, since there isn't light coming in and illuminating the front (camera-facing) section of the legs.
Diffusion of light and reflection off snow can explain some of this, but the potential reflected light on the belly is the brightest point, and that's the problem here - the light source makes less and less sense the more one looks at it.
EDIT 3: https://medium.com/@keithkisser/why-does-all-ai-art-look-like-that-f74e2a9e1c87 Here's an article talking about it, and they touch on the lighting issue a bit. AI doesn't understand light sources.
I don't know a whole lot about arctic foxes, but the timing of the observation seems wrong. Four months ago is around the end of April and start of May, when arctic foxes should begin shedding their white coat for a spring/summer brown one. But this fox still has a perfect white coat and is surrounded by pristine, unmelted snow.
Definitely a good point. Some are easier to spot than others, like that really funky beetle or those snails.
Not reliable. I have many observations I've cropped to square because it best centers and highlights that particular observation.
For me, the 'offness' of a generated image is my best assessment, but I am also unreliable!
This looks really frustrating but I don't see how to minimize it given how more casual people use the iNat app…
My professors and biologist coworkers use it basically as a Seek alternative: they try Seek first and then upload a photo in iNat to see what the computer vision says. If they actually submit the observation, it's often just one photo, missing date and location data, and generally a pretty low-quality observation. It's hard for me to argue that they should make better observations, because their goal is to ID plants ASAP, not to create clean observations for the database.
This will be even worse with casual newcomers to iNat who don't even necessarily realize that their iNat submissions go into a whole citizen science database and will be reviewed by real people. They just think it's cool to test random personal photos, friends' photos, screenshots, AI images, etc. and see what the CV says. It's not really possible to avoid this without a really tedious introductory tutorial every time someone installs the app.
Try offering them this link as a Seek alternative. It is a Computer Vision demo that lets people submit a picture to see what CV suggests, without creating an iNat account or observation:
I don't think this is a huge problem. It's really only a new manifestation of an existing problem, which is uploading fake observations. Whether it's an AI-generated image or one stolen from a photographer, it's really the same issue. I'd argue that the latter is worse for data quality.
If the community cannot determine the difference between real and AI-generated, then what does it matter if it's a reasonably common species? An AI-generated Mallard in an area where Mallards are found doesn't change the data enough to matter (although it may be annoying to think about, depending on your personality type). For rarer species, there won't be enough training data for the AI to reproduce it with any accuracy, so it should be easy to weed out and flag.
The picture looks like a fox to me; I've never seen one in real life. AI manipulation can work in several ways. It may be something like adding colours to a black-and-white image, or adjusting saturation or brightness, or photo stacking. The AI presence is a selling point. It may not change the identity of the creature, but it enhances the overall picture. Another aspect of AI is automation. Two decades ago we learned to digitally superimpose the heads of politicians onto bikini models. Today's AI is that automation: it can pick from several images to create novelties that do not exist in the real world. It is skilled at image manipulation, and fast. It took me a week back in the day to get some artistic work done.
I hope that someone (somehow) makes some sort of tool or site that can tell AI photos using metadata or something (I don't know much about computers, so I'm just guessing), because some of these AI photos are scarily accurate. My biggest worry from all of this is that genuine observations could be flagged as AI in the future if they are from new accounts or taken at a weird angle.
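For what it's worth, some generators do leave traces in a file's metadata (a "Software" tag, or C2PA "content credentials"), so a crude screening pass is at least conceivable. Here's a rough Python sketch of that idea - purely a heuristic, since metadata is easily stripped or edited, and the generator names below are examples I'm assuming, not an authoritative list:

```python
# Rough heuristic sketch: look for known AI-generator strings in image metadata.
# Easily defeated (metadata can be stripped or edited), so treat a hit as a clue,
# never as proof; the AI_HINTS list is illustrative, not exhaustive.
from PIL import Image

AI_HINTS = ("midjourney", "stable diffusion", "dall-e", "firefly",
            "imagen", "openai", "c2pa")

def ai_metadata_hits(path):
    img = Image.open(path)
    # Gather EXIF values plus text chunks (e.g. PNG tEXt) into one searchable blob.
    blob = " ".join(str(v) for v in img.getexif().values())
    blob += " " + " ".join(f"{k} {v}" for k, v in (img.info or {}).items()
                           if isinstance(v, str))
    blob = blob.lower()
    return [hint for hint in AI_HINTS if hint in blob]

hits = ai_metadata_hits("suspicious_upload.png")  # placeholder file name
print("possible AI metadata markers:", hits or "none found")
```

One small comfort about the false-positive worry: a metadata-only check like this can't flag a genuine photo just for looking odd, the way a visual detector might - though it will miss any AI image whose metadata has been scrubbed.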
This image is AI-generated, from the same site as the original fox one, and I'll admit it could have fooled me very easily.
That one still has a bit of a 'too good to be true' quality, but it is overall a much better AI image, and I could really see it fooling people if they didn't know it was AI.
The shadows are actually correct, and the tufts of fur in the ears look connected instead of just randomly floating.
EDIT: the weirdest thing I can find is the pupils - they're thin, like slits, but they're rectangular on the ends instead of coming to a point.
I used AI to generate various animals to see how accurate it could get them, and the results are far from perfect, so for now, at least there is that:
Besides the fact that this is hilarious, I also noticed that the AI has problems with the finer details of things like butterflies, and also fingers (my toad dude has pretty decent fingers for AI, though):

It looks to me like, right now, AI is best at mammals, which would make sense for a lot of reasons, especially if the AI program pulls images from the web to use (cat pictures are everywhere).
Since most AI uses photos from the internet (I think?) to generate images, this got me wondering if iNat observations could be used by AI. That would be concerning.
In one of my iNat training classes, I was told square is better, so now I shoot in the square setting on my iPhone if I'm shooting for iNat. I crop almost anything else as well.
Right, but I don't think this is about auto-enhancing photos. This is about completely faked observations using prompts to create AI-generated images. I think there's a big distinction. That fox photo isn't an enhanced photo of a real fox. It's a faked image.