Given how little information has been provided, I can only hope we’ll be given the chance to approve or contradict the AI summaries, upvote/downvote style. It has a lot of potential to save time and take user comments into account, but I also know that some users comment incorrectly with extreme confidence. Or what of my own comments, where I explain what I’m thinking but am not necessarily correct, especially when paired with poor photos or odd locations? Curious for more info and not ready to fly off the rails about this yet.
I think what the iNaturalist team may be ignoring is how much bad press it will get from deep connections with “big AI”.
I think many things in this article may translate to iNat.
It is a sensitive topic, like ads, and should be treated as such. The team should run some kind of survey of user sentiment (at least on the forum) with regards to generative AI. I feel like it is somewhere between “mixed” and “overwhelmingly negative”.
By the way, on Wikipedia, we had something called the “Spanish Fork” because of an ill-advised suggestion to run ads around 2002:
https://www.wired.com/story/wikipedia-spanish-fork
Not sure if an “iNat fork” can even happen, but it illustrates what ethos-breaking changes, or even appearance of so, can bring on a community.
With all due respect, this is a very unprofessional and closed-minded assertion- there are thousands, possibly millions of AI and ML algorithms in existence, including iNat’s own classification features. More than likely, the website itself is built on open-source code and frameworks from Google, Meta, etc.
Making the data gathered for this project available for R&D falls well within the mission scope, and such action could come from academia, the private sector, or even government- the intersection of all of the above is what powers the modern world. We’re in the business of making this data available, not regulating what’s done with it, and I highly doubt this is being used for nefarious purposes. Edit: I don’t see any indication Google even expects anything in return, so ignore this paragraph.
Here’s a direct link instead of a link to a reposted screenshot of a Tweet- https://blog.google/outreach-initiatives/google-org/generative-ai-accelerator-cohort-2025/. Reading this, it looks like just a brief and accurate summary of iNaturalist’s service. Sounds to me like a donation of capital and expertise we should be excited about.
This project was anticipated at the end of this blog post from last year, which was otherwise about another somewhat similar project using AI to describe images on iNat (although the AI was not trained on iNat content for that one): Search iNaturalist Photos With Text
In general iNat has been involved in AI research for a long time and I’d expect it will continue to do so, so this is another one of those tensions that are bound to come up regularly, as I think the demographic(s) that find iNat most appealing are also more likely than average to have reservations about AI.
Edit: Might be worth linking this thread as well as some overlapping subjects have already been discussed extensively there: Is the iNaturalist use of artificial intelligence damaging this planet?
Here are some studies to the contrary, suggesting both the risk and the actuality of increasing model decay under recursive training, plus other guardrail failures, from Nature (Shumailov et al. 2024) and Gehrmann et al. 2025 on arXiv. Vu et al. 2025, also on arXiv, note that while they wouldn’t use the term “collapse,” they foresee “homogenization,” i.e. persistently biased results as AI models norm off each other by feeding on each other’s generated content – and imo, “all models regress to a mean based on whatever bias was present in the training sets” sounds worse to me than these things just imploding.
You may also be interested in this very recent Apple trade paper about three LLMs’ inability to handle particularly complex problems. I understand it made some waves in the general tech space. Not 100% relevant to the kind of problem iNat presents, but certainly not wholehearted support of a hallucination-free, data-uncontaminated, infinitely flexible and accurate LLM – and no one can deny Apple has a vested interest in AI succeeding.
I think rejecting the potential for hallucinations, inaccuracies, and vicious cycles is shortsighted. Inherently, LLMs don’t use consistent algorithms or explicit logic sequences, and even their developers don’t always know how they come up with their results. There’s only so much ‘fixing’ it is possible to do, and I concur with graysquirrel and franzanth that this is both potentially a real accuracy problem and an actual, current trust problem among iNat users and domain experts. Even if you don’t agree with the distrust, it exists, and it should be of concern.
To pull my own credentials here, I work with discriminative AI in an academic setting, and getting those models accurate is hard enough without the added black-box challenge of genAI. The tendency towards bias is hugely evident. I’d like to add in response to @peaceblaster on a similar note that I find it disingenuous to lump together all kinds of software which are sometimes marketed as or called “AI” in this way. “Unprofessional,” even. There are well-known historical and functional differences between these different kinds of AI, of which there are many, as you note. There are so many because they have been developed in different ways, by different people, for different functions, with different intended and unintended outcomes, and it is perfectly consistent for anyone to take different stances on different versions.
If this tool is meant to condense ID remarks into ID suggestions, I think it could lead to many more misIDs than there already are on the site. I ID groups like Springtails, which are minuscule and many of which cannot be IDed from photos. This could lead to more overconfident misIDing in a group where there is already a ton of it. In addition, it feels like a spit in the face to our wonderful host of IDers. Personally I would love for iNat pages to have ID resources made by users for users. An AI summary just feels like a blatant disregard of the work we do and would be willing to do! Not a fan of this decision at all and would really love an in-depth explanation from the iNaturalist staff.
I didn’t intend to refer to technical discussions as unprofessional, that comment was in reference to jeers such as “bullshit” or “down with AI” that appear in both OP’s post, and other comments throughout.
I work in AI as well, and like any technology, of course it’s going to have issues- I don’t think any form of information technology has been without error. I’ve QA’d several LLMs, and outside of strange use cases, they perform pretty well. My understanding here is:
- This isn’t intending to replace our existing people-powered labeling and cleaning process
- People make mistakes too, hence our peer review process
- This isn’t even changing the existing iNat classification model, it’s merely using generated text to describe its reasoning
- Google isn’t expecting any data, money, or access in return
And yes, of course there’s a whole world of AI out there, that’s the point I was making… this immediate knee-jerk reaction to the word “AI” is unwarranted and disheartening coming from an otherwise intellectual and mission-focused community. “Lumping them in together” is the exact behavior I was trying to call out.
Lastly, I don’t think we should be ideologically policing anything as iNaturalist contributors… at the end of the day, we’re making an open dataset here. If a private company with world-class expertise and capital wants to help us with that, I think that’s good news. I’d hate for this project to devolve into a bunch of forks and ideological infighting like Linux or other FOSS projects. Maybe we could add an opt-out when people post observations, or as an account-level setting?
edit: I’ll also add that ChatGPT has gone down twice in the last 24 hours lol
As a professional taxonomist, I agree 100%
Further, even if all the AI is doing is pulling together comments about identification characters for the species pages, that’s still bad, because there is no vetting of them. There’s no way to know whether those comments were made by people who know what they’re talking about. I trust characters listed on BugGuide because I know they come from sources and people who are authorities on the subject, and I can double-check if I need to. But if it’s AI-generated, there is 0% trust that the characters come from authoritative sources rather than being hallucinated by the AI.
Thanks for the detailed response @chestnut_pod. I think the use of the term “collapse” would be justifiable in a lay setting like this given that it has been used in academic settings (in one of the links you shared), and I stand corrected about there being no evidence of models nearing full collapse. I apologize @graysquirrel.
I do think it’s worth mentioning that none of the sources you reference contain the term hallucination or any variation of that term. I believe my point that “there have been large steps in minimizing that risk [the risk of hallucination]” stands.
I think rejecting the potential for hallucinations, inaccuracies, and vicious cycles is shortsighted.
I did not mean to make this point. My point is that there are specific ways to minimize this risk. However, all three happen without AI. I know I have “hallucinated” ID tips that I afterward cannot relocate in the literature. I know I have been inaccurate. I know I have both corrected and contributed to vicious cycles. As @pisum pointed out, if this is done right, AI could be used to correct inaccuracies and vicious cycles. There are several ways this could be done. The simplest solution is similar to Facebook’s new group chatbot feature, which identifies and collects posts that may have an answer to a user’s question. iNaturalist could also implement RAG (Retrieval Augmented Generation) and train the model on iNaturalist to overcome many of the limitations brought up so far in this discussion. I do not foresee iNaturalist’s AI integration being susceptible to the recursive data problem, especially if it is privately hosted.
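To make the RAG idea above concrete, here is a minimal sketch of the retrieval step: find the existing user comments most relevant to a question, so the model’s summary is grounded in real community text rather than free generation. The comment strings and the toy bag-of-words embedding are my own illustrative stand-ins; a real system would use a proper embedding model and iNaturalist’s actual comment data.

```python
# Sketch of RAG retrieval over (hypothetical) ID comments.
# Toy bag-of-words cosine similarity stands in for a real embedding model.
from collections import Counter
import math

def embed(text):
    """Toy embedding: lowercase bag-of-words term counts."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, corpus, k=2):
    """Return the k comments most similar to the query."""
    q = embed(query)
    ranked = sorted(corpus, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

# Hypothetical ID remarks, as might be collected from observations.
comments = [
    "Note the dark banding on the hind femur for this species",
    "Best distinguished by the pale antennal segments",
    "Observed near a pond at dusk",
]
top = retrieve("how to distinguish by antennal segments", comments, k=1)
```

The retrieved comments would then be passed to the language model as context, which is what constrains it to summarizing what identifiers actually wrote instead of inventing characters.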
While I get that distrust exists, I think we have to avoid throwing the baby out with the bathwater. Even before AI became a thing, Google was doing stuff you’d find morally disgusting. It wasn’t reasonable back then to criticize an organization for using a custom search tool powered by Google because the information in the sources could be inaccurate, may have been immorally obtained, or could be financially or otherwise supporting a tech giant with whom one disagrees. This is called the genetic fallacy, which is not an attack on your character or an insult–I’m sure you could find logical fallacies in my comments here, we aren’t logical robots.
I stand to be corrected - but - the iNat account on bluesky is from an individual - not an official iNat account @tiwane ?
> Lastly, I don’t think we should be ideologically policing anything as iNaturalist contributors… at the end of the day, we’re making an open dataset here.
I am taking this as an opportunity to share my 2 cents on this being larger than just generative AI, but about iNat’s values. Many of us think of iNat as a powerful way to deepen connections with nature and care less about the dataset, and more about the ideology. I am reading a book by Jenny Odell, How to Do Nothing: Resisting the Attention Economy, that articulates well how iNat goes far beyond being an open dataset.
At the end of the day, we are making a revolutionary change in how people (including us) interact with nature and see the world. This ties deeply into our sense of self, community and purpose. That makes it so we cannot avoid politics — but hey, politics is good! Otherwise, iNat may eventually turn into an amorphous, spineless blob, not unlike other online places we all know too well.
Resisting enshittification is way harder than it sounds, but it is vital for community/purpose/passion-driven platforms. I don’t know whether adding or complying with big-tech GenAI is enshittifying, but I am so thankful to @graysquirrel for raising the discussion!
The original post is on twitter, now X, from the official iNat account there. I linked to the bluesky screenshot of it because I don’t have an X account.
I think a huge problem is not informing the community what’s happening or why this is happening. That makes me ultimately more suspicious of what this will be and what ethics are going to be used.
GenAI is notoriously resource-intensive, consuming water and electricity at an alarming pace. iNat’s users are people who care about the natural world and biodiversity, both of which are threatened by extreme climate change, all exacerbated by data hogging for little gain.
I have no issue with small-model AI trained with consent and for a narrow purpose. That’s already being used in the photo ID system, but we aren’t being told what this particular deal is, what this partnership involves, or where and how it is being used. That’s shady, and I suspect the reason it wasn’t proudly announced is that they know how unpopular this technology is.
If this is just feeding into another GenAI fuel for profits without our consent I plan to stop using the site. Which sucks because I love this project and have recommended it to many people.
I wouldn’t call the reaction entirely kneejerk though I have definitely seen people immediately assume that this iNat-Google collab will be for image generation when the explanation referred pretty clearly to text generation, haha. But I think most of us in this forum understand this. We know this isn’t going to replace community identifications, that there’s plenty of mistakes already, and of course that taxonomic organization isn’t going to be changed. We’re upset because we feel that at best it’s unnecessary, and that at worst it will add to misinformation on the platform that people will trust implicitly. I wouldn’t be surprised if the majority of iNat users are just casually curious about what they upload and don’t question what they get in return.
Like other people said, an AI feature that pulls up observations of the same organism with other users’ ID tips could be pretty helpful. Users could read from other users and judge for themselves how accurate they are, especially since they could use that as a jumping point to do extra research on the internet. AI writing the explanations is an unnecessary middleman in my opinion. We could really just be using capital to hire sorely needed taxonomic experts to write accurate summaries instead.
I’m sure most if not all of us love iNat because it’s an open dataset that’s contributing to a wonderfully diverse pool of research. Given how education and research are being degraded through both policy and the current state of AI, minimizing this discussion to “ideologically policing… as iNaturalist contributors” is downplaying our complaints. A bunch of people here are professional educators and scientists. All of us are passionate about nature and learning, which are worth fighting for. This is getting to be a systemic concern.
BugGuide has a great system for including basic ID information in the “Info” page of each taxon. I would love to see a similar feature on iNaturalist! I’ve wanted that for years. It would be tremendously beneficial to be able to share critical ID features in a structured way, especially for understudied taxa.
…but my stomach gave out when I heard that text might be AI-generated. AI remarks have no guarantee of accuracy or accountability, and scrutinizing and correcting the remarks would double the workload on active taxonomists and identifiers like me. I would gladly offer up my time to manually write ID remarks for the taxa I study. It is sorely needed! But gen-AI will only impede accurate communication.
Regardless of how iNaturalist was planning to implement gen-AI, I am extremely disappointed in the vague way the partnership was announced. This is already a PR disaster. I urge iNaturalist staff to give clear assurance to the community that implementing gen-AI will not result in massive scientific and ethical issues.
Hey everyone, we apologize for retweeting this announcement from Google without providing more details about it. That was our mistake and I understand why it created a lot of doubts and uncertainty.
We’re working on a blog post now that will provide more details and quite a few FAQs about our work stemming from the Google.org program. When it’s ready we’ll post it to inaturalist.org/blog and we’ll link to it here.
Edited to remove “grant” as “program” is more accurate.
Edit again to change back to “grant”, see explanation here. Sorry about that!
Tangential to the topic: in case this whole GenAI thing is real, is there a way (through the web UI, or the API) to locate and void/delete all the comments accompanying one’s past IDs?
edit: to be clear, I don’t want to delete (what’s left of) my account (…yet); simply retain at least basic control (and intellectual property) on what happens to that part of the content I may have contributed.
Yes - apologies - I see iNat is active on bluesky - since I first asked about it.
I believe if you delete your account you also delete all of your IDs and comments.
which would be catastrophic if enough human users leave the platform, which adding more genAI will definitely encourage
Can we expect that blog post to come soon? Today? People are - understandably - freaked out by the potential implications of this. Lots of chatter about deleting accounts, etc, which breaks my heart to see. I’m still resisting the urge to be reactionary about this, and I hope that whatever context you provide paints a less grim picture of the future of the platform. There is a way forward that doesn’t involve the spiritual end of the platform, and I hope that is the truth, here.
People are fearing the worst, and given the circumstances of, well, seemingly everything these days, I don’t think it’s an unfounded fear.
In short… please don’t let iNat become enshittified. My heart can’t take it. :(