Hello,
I have recently been seeing more use of so-called AI poisoning in artist circles. Many of us, especially those of us who try to take nice pictures, probably don't want our photos being scraped by AI companies for use in their models, so I have been considering using AI poisoning software such as Nightshade (https://nightshade.cs.uchicago.edu/index.html) and Glaze (https://glaze.cs.uchicago.edu/) to make my images unsuitable for model training.
This, however, poses an issue: iNat uses computer vision trained on our photos. As I understand it, this is not generative AI, but I am still concerned that poisoned photos could affect iNat's computer vision. On a related tangent, our observation data may also be used by actual scientists to train models that serve a purpose, so poisoned images could theoretically damage those models too.
I would be interested to hear if anyone has tested whether this software is problematic for iNat's computer vision, and if anyone has any other thoughts on the topic. I have yet to use it, but I still think it's interesting! Perhaps in the future there might have to be an option to note that an image is poisoned so it's not used as training data for iNat, to avoid causing problems. The implications for research are also fascinating to me, as I think many people are happy for their photos to be used in a scientific setting but not so much to line the pockets of shareholders. Will be interested to see what people's opinions are on this :)
Gustaf