How should I deal with observations that are or at least appear to be AI generated? I ask because I ran into an AI image and don’t know how to flag it. Should it just be flagged as copyright infringement or be under “Other?”
No evidence of organism.
Just a small clarification that this is only appropriate if the observation does not contain any other media with evidence of an organism.
If it does, I would say the correct course of action is “evidence is related to a single subject = no”
That single-subject DQA applies to observations with multiple photos, where each photo shows a different sp.
It doesn’t apply to a single photo which includes multiple species — there, the observer must tell us WHICH sp. they mean.
I think the situation discussed was 1 AI image + 1 or more “normal” images, so there would be multiple. I don’t know whether I completely agree with the DQA vote solution, though.
IMO iNat should treat AI images similarly to copyright infringement and just hide the image. But that’s up to staff, and there may already have been a discussion about this somewhere on the forum.
If hide is your intention, then flag for copyright.
Until we have a better solution.
AI images aren’t really copyright infringement though, so that doesn’t seem like a good solution.
The case being discussed is where a user has a number of legitimate media and then also one or more AI images on the observation. Therefore, there are multiple media, and not all of them depict the organism. Therefore, “evidence is related to a single subject = no”.
While it’s not a settled legal issue whether AI images constitute copyright infringement (and the answer may end up being country-dependent), the point is that ‘copyright infringement’ is the flag that, on a technical curation level, most accurately does what we want: it removes the image and creates a flag, but doesn’t bog down the queue of regular flags that require curator action with a flag that should never be resolved.
If staff wants to rename it ‘copyright infringement/non-original content’ or ‘copyright infringement/AI generated’ or something, I guess they could, but I don’t think that would really add any clarity. Or they could create another flag category that does exactly the same thing on a technical level but is called something different, but I’m not sure the staff would regard that as an efficient use of developer time.
I don’t support using only the DQA here, because then the photos stay in the computer vision pool and will also continue to confuse future IDers/data users every time they come across the observation.
Edit: In the meantime, please do flag all AI image uploads somehow, to help staff understand the scope of the issue and arrive at curation standards. Staff has been flagging all instances they find, I think.
Still, I would count the AI image as Homo sapiens artwork and, if anything, ‘no evidence of life’. It is not a badger among the bee photos.
Casual images are only used for the CV if they’re flagged as captive (and not as any other type of casual). So the images will not be used in the CV at all. Therefore, I don’t see the point in flagging them at all tbh, especially when the DQA does the job just fine.
Like I said, this decision would better be left for staff. I don’t think I should make the decision to hide stuff when there isn’t a clear guideline to follow.
And I do agree with @raymie that it should be handled differently than actual “blatant” copyright infringement. (Though legally/morally, I think it should be handled that way, and AI shouldn’t be trained on copyrighted material without a license.)
I think they should be flagged somehow, because uploading them repeatedly is a suspendable offense, regardless of whether it is technically copyright infringement:
https://www.inaturalist.org/pages/community+guidelines
Suspendable Offenses
- **(!) Machine generated observations, identifications and comments.** We do not allow machines to generate and post content on iNat with no human oversight curating each piece of content, and any account suspected of doing so is subject to suspension and the removal of the content. Read more about what constitutes machine generated content here.
Though maybe the wording of the bullet point should be amended to more specifically include AI generated images in the meaning of ‘machine generated content’
Thank you. That provides at least some clarity.
I think this is still a somewhat grey area though. On the page about machine generated content the key aspect differentiating between allowed and disallowed behaviours/content in this regard seems to be human oversight. That in turn is however not clearly defined.
I think that page should nowadays include a clear statement regarding AI-generated images. IMO, it should be black and white “AI-generated images are allowed/aren’t allowed” (hopefully the latter).
That was in the context of a certain situation. I don’t believe I wrote it in a way that says it’s a blanket solution, but I could have been more specific.
Can you please specify? Are you saying the entire image was completely AI-generated? That’s different from real photos that have used some sort of AI-derived generative fill.
I apologize; I edited the comment to remove any suggestion you have definitely endorsed the sentiment.
I guess the only issue I can envision is a line-drawing problem between Photoshop/photo enhancement, permitted ‘sketch’-type content, and non-original content.
I think the difference from a sketch is that you can always tell that it is a hand-drawn sketch, and decide how you want to think about the evidence presented. The issue with treating an AI-generated image as a category of ‘sketch’ is that it is impossible to know whether the AI is hallucinating particular diagnostic features, unless the prompt used to create the image is provided in the observation comments. If the prompt is included, I might be more inclined to say it could be allowed, though I would prefer a better way of differentiating that kind of content. Also, if it is conceptualized like a sketch, then we shouldn’t be voting “no evidence of organism” either, because we consider sketches evidence.
I do recall at least one case a while ago where a real photo was so heavily enhanced with Photoshop (e.g. sharpening filters) that it led to a debate about whether it was AI-generated or not. Though IIRC that was more a factual question about whether it was derived from a real picture than a rules issue. I think the Photoshop-vs-AI question is basically a question of what level of factual investigation by curators should be expected — regarding whether the content is allowed per the rules — before curator action is taken, much in the way we often check whether we can find an alternate source for a photo before ruling it copyright infringement, e.g. using a reverse image search.
But one could argue that there was human oversight: the would-be observer made the decision whether to upload after the AI image was generated.
This is a really hard line to draw. There are AI filters that only sharpen images (as discussed in that thread). But this flycatcher image was also only “updated” from a real image, and the AI introduced obviously inaccurate features. That was 5 months ago, and AI can probably now generate a more realistic image from scratch.
I’m for a flag similar to the copyright infringement one that can be used for wholly-generated AI images. It would hide the image and would not appear in the unresolved flags list by default.
The thornier issue is when actual photos use some sort of generative AI, and where the line is drawn. More and more photos are going to use that kind of software, often without the photographer’s control in many smartphone situations. As I’ve said elsewhere, I think a DQA option that allows people to vote on the accuracy of the evidence is probably the best solution there.