I reported the observation; hopefully it gets taken down soon. Luckily, AI can’t ever get spider anatomy right, so it wasn’t that hard to tell. I just wanted to post about it because it made me sad, and I wanted to see if this is a big problem in iNat circles outside of spiders.
I wouldn’t say it is a “big” problem because the volume of AI photos is currently fairly low. But I think it is a “looming” problem - it’s only going to get worse as AI gets better at generating realistic photos and more people start posting them. Hopefully we’ll soon have clearer guidance on how to handle AI photos, but for now, flagging them for curators to address is the best approach.
Thanks for reporting it!
It’s a growing problem - seems to be a handful of users per week at the moment, but it will probably increase. Fortunately they are generally easy to spot, for the moment anyhow.
I may be naive but wouldn’t the EXIF data with each photo be a way to verify authenticity?
You can edit that data regardless of the image source, plus I often upload screen caps cropped from larger photos, which carry none of it.
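To illustrate why EXIF can’t serve as proof of authenticity: camera metadata is just ordinary bytes stored inside the image file, not a tamper-proof signature, so anyone can rewrite it. Here is a minimal, heavily simplified sketch - the “EXIF” payload below is just a placeholder string rather than a real TIFF IFD, and the file is a bare JPEG shell, but the point it shows is real: editing metadata is nothing more than replacing bytes.

```python
import struct

# Build a toy JPEG: SOI marker, one APP1 (EXIF) segment, EOI marker.
# NOTE: the payload is a simplified stand-in, not a valid TIFF IFD --
# this only demonstrates that metadata lives as plain bytes in the file.
def build_jpeg_with_datetime(datetime_str: bytes) -> bytes:
    exif_payload = b"Exif\x00\x00" + datetime_str
    # APP1 marker (0xFFE1) followed by a big-endian length field that
    # covers the length bytes themselves plus the payload.
    app1 = b"\xff\xe1" + struct.pack(">H", len(exif_payload) + 2) + exif_payload
    return b"\xff\xd8" + app1 + b"\xff\xd9"  # SOI + APP1 + EOI

def swap_datetime(jpeg: bytes, old: bytes, new: bytes) -> bytes:
    # "Editing EXIF" is just a byte replacement; a hex editor or any
    # metadata tool (e.g. exiftool) can do the same on a real photo.
    return jpeg.replace(old, new)

original = build_jpeg_with_datetime(b"2024:06:01 09:30:00")
forged = swap_datetime(original, b"2024:06:01 09:30:00", b"2019:03:15 14:00:00")
```

Real tools like exiftool do exactly this, just with proper parsing of the TIFF structure - which is why a plausible-looking timestamp or camera model proves nothing about where the image came from.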
Well, that depends on who is doing the spotting. I have a photo I took of a tupelo swamp full of iron bacteria. The angle of the sun was just right to catch the iron sheen and create rainbow-colored bands covering the entire water surface. When I shared it to Nextdoor, praising the beauty of our town, the comments were from people who convinced themselves that it was AI-generated – and could even explain how they knew (“Look at the shadows”). Honestly, I was hurt that my sharing of nature’s beauty only garnered false accusations of fakery.
This is the current state of technology — where bots seem real, and real humans (and real photos) get accused of being bots, photoshopped, etc.