What is this - iNaturalist and generative AI?

So, in short, you plan to go forward in some form and will ‘listen but not act’ on the feedback so far, given that ‘having a demo by the end of the year’ still seems to be the plan?

3 Likes

A lot (although of course not all) of the negative feedback was about a hypothetical full implementation rather than just a demo, or rested on other similar assumptions that may or may not be accurate. We also don’t even know what the demo will look like yet (a couple of quite different options have been suggested). So without further elaboration, it seems premature to me to say that relevant feedback is being completely ignored.

11 Likes

I’ve seen plenty of principled reasons to object to “generative AI” in posts, both here and elsewhere, e.g. https://freethoughtblogs.com/pharyngula/2025/06/30/keep-your-ai-slop-out-of-my-scientific-tools/

And if there weren’t the hope of implementing it down the line, I doubt anyone would make a demo.

3 Likes

I’m thinking about pausing all my contributions until this has settled.
I am still new, but nevertheless I hate the idea. Everything was pretty perfect. Pausing is still the most constructive idea I have, because if this turns out the way I expect, I’d have spent several hundred more work hours and a lot of heart’s blood on something that others then turn into nonsense. The last 20 years of my professional life ran like that.
Maybe it’s time to search for other sustainable alternatives for doing what I believe needs to be done.

6 Likes

My opinion of adding generative AI is negative, but let’s not throw away the good that iNaturalist does because of hypothetical future evil. iNaturalist may find ways to implement this that aren’t actually harmful to the core activities here. On this particular day optimism is hard to grasp (for reasons outside iNaturalist), but iNaturalist is, and is likely to remain, pretty good.

11 Likes

This may be true for the hundred or so folks here who have decided that the only acceptable “action” for iNaturalist is to bow down, turn tail, and run fast and far from any association with Google grant money or AI.

But that is not the only feedback that has been received, nor is it necessarily a representative sample of the broader iNaturalist community.

7 Likes

Given the lack of transparency, it is of course impossible to know or verify, which is one of the big criticisms that has led to this whole debacle, with seemingly limited willingness to go beyond saying ‘we listen’ and no concrete steps outlined to do better, imho.

But I’ll just go with @anfra1969’s suggestion and hold off from contributing until there is clarity and transparency.

2 Likes

i’m curious… if this thing is deployed to the masses, do you think your day-to-day experience using iNaturalist will change significantly? if so, what exactly do you think will change? do you think that your workflow will actually change in some way? or will you just experience some sort of dread knowing that some of the unseen inner workings of iNaturalist will have been touched by generative AI somehow (but you wouldn’t otherwise actually know that something changed unless someone told you)?

1 Like

Sure. And if that’s not the plan, why do it at all? Not for the sake of nature, nor for better social interaction. That’s pretty clear if you read through the feedback given.
My position: the more I use gen AI, the more I support the concept. I wrote earlier about why I am against it, in the section “what can make gen AI tolerable to you”.
On the workflow:
Today, I upload/polish my find, then check the proposals from image compare.
For non-specialists in a specific field, the proposals are 90+% great, as they guide me to the correct genus.

and how do you think the future workflow would be different?

(i read your other posts, but i think there’s a small language barrier that makes it hard for me to fully understand what your concerns are. so i’m asking these questions to try to clarify.)

1 Like

We were writing in parallel…

Then I go to the public web and search expert forums for how likely a mismatch of the proposed species is.
Often my pictures are good enough and I keep it. Sometimes they aren’t, or the species in general can’t be separated without anatomical dissection. Then I leave it at the genus level. A few of my beetles have alternatives from other genera. That’s nasty, and the genus level will not do.
When I then submit my find, sometimes with linked evidence, I still need to see if other reviewers have a different opinion.
Often new opinions add value, but sometimes not, especially if my find did not match the first-in-list proposal of the image analysis.
So, all in all, already a pretty complicated workflow, but also an astonishingly reliable one.
If that’s now amended by AI text analysis of unverified free-text comments, I will have the additional challenge of evaluating whether the new gen AI’s chain of argumentation is right or wrong. That means I spend yet more time assessing automated proposal input that is likely to be overly convincing.
If that were done, and especially if I came to a different result, I’d have to explain and defend that position against every second and third reviewer who may simply go with the AI proposals out of convenience.
The same logic will apply when it comes to reviewing the finds of others. Bottom line: I’ll spend more time correcting/training the AI than collecting/adding/reviewing content. Or I get blunt and just push the buttons to accept automated proposals, which in themselves are no knowledge and, if handled like that, will only get worse, producing ever more superficial proposals. Who will then review the mess?
This is not how I will spend my free time.

It reminds me a bit of the early days of Facebook. I knew people who were completely isolated but had hundreds of Facebook “friends” at the same time.

Or like in school, when I wrote an essay and my teacher’s comment was “nice work, but you missed the point 100%”.

Maybe not intentionally, but somehow the idea looks like a Trojan horse…

The more I learn about this modern “robotics”, the more I see a common agreement among the makers to rank Isaac Asimov’s Three Laws upside-down, or 1/x. If I cannot prevent the consequences, I can at least try to withhold my contribution.

1 Like

thanks for elaborating on your thoughts.

the way i think about it, if you have an existing process that you like and don’t want to change, then you could just ignore additional text guidance altogether. if such text is presented by default but there’s also an option to disable it, that would be a systematic solution to your concern.

based on my limited testing of various AIs’ ability to summarize identification notes that i extracted from the system for taxa i’m familiar with, the results have been surprisingly good, in my opinion. there are occasional bits of information that are imperfect because they’re based on real users’ imperfect comments, but maybe these could be reduced by presenting only information expressed by multiple users. there are also occasional bits of information that are true but missing context, and providing links to source observations can allow folks to get that additional context.
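
to make that multi-user filter concrete, here’s a minimal sketch in python. to be clear, this is purely my own illustration, not anything iNaturalist has described: it groups tips by exact normalized text, whereas a real system would have to cluster semantically similar statements instead.

```python
from collections import defaultdict

def consensus_tips(comments, min_users=2):
    """comments: (user_id, tip_text) pairs extracted upstream.
    returns only tips stated by at least min_users distinct users."""
    users_per_tip = defaultdict(set)
    for user_id, tip in comments:
        # naive grouping by normalized text stands in for real
        # semantic clustering of similar statements
        users_per_tip[tip.strip().lower()].add(user_id)
    return [tip for tip, users in users_per_tip.items()
            if len(users) >= min_users]

comments = [
    ("user_a", "Note the dark wing bar"),
    ("user_b", "note the dark wing bar"),
    ("user_c", "smaller than its lookalikes"),
]
print(consensus_tips(comments))  # only the tip two users agree on survives
```

a filter like this would trade away some rare-but-useful single-user tips in exchange for fewer idiosyncratic or mistaken ones.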

i’m particularly impressed by the AI’s ability to surface obscure sources that other users have found helpful. for example, for one taxon, the AI surfaced observations where identifiers shared references to (1) a university thesis for the taxon, (2) an observation that contained high-quality images of 3 similar taxa side by side from various angles, (3) an observation with images where someone had circled one of the diagnostic features that would not be obvious to most folks not familiar with the taxon, and (4) an observation with an uncommon color variant. (none of these would be easily found in a standard web search.)

so, for me, if iNaturalist’s demo product can produce similarly helpful results, then i would have no issues incorporating that additional text guidance into my identification workflow, even if some of the bits of information it produced were potentially erroneous (as long as the majority of the information was good and as long as it was clear the guidance was summarized by AI from unverified community comments). i personally like having that additional insight into how other users are identifying the same taxon, and as long as it’s not presented as being authoritative, i don’t see how such guidance could be perceived as “overly convincing”.

even with disclaimers in place, it’s definitely possible that some users could take the text guidance and use it as is without doing additional research. in my mind, that’s not much different than what may happen today when new users make IDs based solely on computer vision suggestions. (the notable exception would be where users using new guidance could feed incorrect bits of information back into the system as identification notes or comments.) i suspect this kind of usage will be relatively uncommon, just as it is now. i’d be surprised if additional text guidance would create a lot of new sloppy, overconfident identifiers… although i could be wrong.

just speaking from personal experience where i’ve tried to identify taxa that i’m not familiar with, i often use a workflow similar to what you described for yourself, except that sometimes i can’t find good outside sources that i can easily understand. what i often do in these cases is identify to a higher rank and add a note to indicate what i think the species-level ID could be, based on my limited ability to visually match reference photos of the species that the computer vision had identified as a set of possibilities (and maybe also a search for additional congener species in the area). more often than not, an expert will come along and add a species-level identification that matches my suspicions, but it might take months or even years for that to happen.

so if those experts have written explanations of how to identify the suspected taxa, and there’s something that can quickly surface that guidance, then that information would be super useful for me in my initial research. given more / better guidance to start with, it’s possible that i might more often be comfortable identifying unfamiliar taxa to a lower rank, but i don’t see myself becoming any sloppier in making my identifications. (i would still check sources and look for corroborating guidance, not just take it at face value.) … but it’s possible that my way of working would not be representative of how others would work.

9 Likes

I believe we are not too far apart.
If we distinguish between identification and verification, I can agree that for identification I could accept or decline CV or gen AI support and live with it.
But for verification?
That is, in my opinion, more serious and needs more diligence, because it has different and much more long-term consequences. Verification makes the difference between a photo and a scientific observation.
And the current system uses the same process for both and will not be able to keep them separate.
In your words, I see a more “sloppy”, optionally gen-AI-supported process for identification as potentially tolerable.
The real issue is when it kicks into the verification process and people start to confirm finds because they match the AI’s “visions”. Or if it forces me to verify or falsify those visions (train the AI) prior to verification.

1 Like

i just want to make sure we have the same concepts in mind. i think you are saying:

  • identification = identification by an observer (on their own observation)
  • verification = identification by someone other than the observer (not on their own observation)

is that the correct interpretation? or do you mean something else?

1 Like

Seems to me that actually this would be

identification = putting the first name on the observation (maybe by the observer, maybe by someone else).

verification = ID by somebody else.

3 Likes

True.
Identification makes it a proposed find. Whether the proposal comes with or without AI influence, who cares, as the human finder’s analysis may also be wrong.
Verification then makes it an observation, worth considering in all sorts of scientific evaluations. In the future, also worth considering for creating more gen AI content.

The gen AI acts like a “super influencer” that educates itself with unverified content. It has neither a university degree, nor does it know the difference between data and knowledge.

If the verification step is influenced by the gen AI, I do not see how to prevent the AI from being trained on conclusions it itself provoked earlier. Like an endless self-fulfilling-prophecy loop, or a hall of mirrors.
(Or like politicians who believe in something because they already said it before and now read it as an AI summary on the internet? :wink:)

The only solution to this is the human user.
We all know what we can expect from humans and what we can’t.
Some button-pushers will just accept the complex written content the gen AI proposes, out of convenience, and in doing so support chaos.
Some frustrated users will step back one step after another until they quit, because they neither want to train the AI nor want to see more of their efforts go down the drain.
Some “Don Quixote” characters may try to preserve the platform, whatever it costs them.

iNat is already in a similar situation with the CV image compare. The above problems were accepted because the gain was a better initial classification (or, in the above terms, “identification”) to the genus.
But as this is already functioning, it can’t be made much better by gen AI.
Still, the cost of gen AI in the verification step must then be paid (more complexity, conflict, and frustration for experts). A cost for nothing?!

I will stop these AI discussions now, or I will rename my account to “cassandra1969” :wink:. By the way, I seem to remember that she was put into slavery by the Greeks anyway, even though she warned the Trojans not to take the horse into the walled city.

Maybe it’s time to check if/how I can shield my content from AI or other users’ nonsense (one step back).
I agree it’s not a very social thought, but maybe it’s the only way to keep me here.

1 Like

it sounds like your concern is that folks who blindly rely on AI suggestions to identify (or verify) will push observations to research grade as the wrong taxa, and this in turn will decrease the accuracy of the model because it will be trained on bad data, resulting in a feedback loop.

while feedback loops are always potentially going to be a problem, this is a problem that already exists with the existing computer vision model. it’s not a problem that suddenly appears because we start using generative AI in some way. the way people currently handle the issue is just to do a deep dive into a problematic taxon and fix the observations there. that breaks the feedback loop.
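
just to make that concrete, here’s one hypothetical way a training pipeline could dampen such a loop: require at least one identification made without CV assistance before an observation is eligible for the next training set. (this is purely a sketch in python; the field names are made up, and i’m not claiming this is how iNaturalist actually selects training data.)

```python
def eligible_for_training(observation):
    """keep an observation for the next training run only if at least
    one of its identifications was made without CV assistance."""
    # "ids" / "cv_assisted" are illustrative field names, not
    # iNaturalist's actual schema
    return any(not ident["cv_assisted"] for ident in observation["ids"])

obs = {
    "ids": [
        {"user": "a", "cv_assisted": True},
        {"user": "b", "cv_assisted": True},
    ],
}
print(eligible_for_training(obs))  # False: every ID leaned on the model
```

a rule like this wouldn’t replace the manual deep dives, but it would keep purely machine-echoed agreements from feeding straight back into training.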

speaking of using generative AI in some way, it’s not clear that the final product will be some sort of language model that replaces the computer vision AI model. it’s possible that generative AI will be used only to summarize text suggestions that could be added to taxon pages and supplement computer vision suggestions. if used in this limited way, the only kind of feedback loop that could occur would be when users take AI-summarized text guidance and add it back into the system as comments or identification notes. that’s not great, but it’s not really creating a feedback loop in the way you’re describing.

it’s also not clear that many “verifications” (or confirming identifications) occur as a result of someone relying solely on a computer vision suggestion now. although i’ve definitely seen folks agree to wrong initial identifications, those agreeing identifications are not often marked with the computer vision assistance indicator. this means that the “verification” in these cases occurred either by the identifier simply clicking an agree button or by actually manually inputting their own suggestion.

also, a lot of the power identifiers who make a large portion of all identifications (and “verifications”) in the system use the web Identify screen to make most of their identifications. here, computer vision suggestions aren’t offered in the usual way, if at all. so neither computer vision suggestions nor potential generative AI suggestions should really affect their normal workflow.

certainly there are cases where folks may prefer or sometimes use interfaces where computer vision suggestions automatically appear if a user starts an identification without clicking the agree button. maybe the text affects people’s decisions in these cases, but i would just guess that if the text is considered at all (rather than just ignored), it’s usually going to be used to make a more informed (not less informed) identification (or verification).

9 Likes

Very true: a lot of the time, AI features add bloat to apps that required less space before. Sometimes they run very slowly on older devices. To some extent this punishes anyone who doesn’t have the latest-and-greatest cellphone.

4 Likes

I think this would be helpful, like the little pop-up in the Cornell Lab’s Merlin Bird ID.

Also, could someone please explain what in the name of a pumpkin toadlet is going on? My brain practically exploded trying to understand what’s happening after reading this thread. Has iNat changed? How? When?

1 Like