I do not oppose users being able to get additional information compiled by the AI. Information about identification or about general biology, whatever, assuming it’s labeled as AI-generated. (I think concerns about computer time spent generating AI answers would be reduced if the program stored answers so that, for example, once it’s compiled data on American Robins it can just retrieve what it compiled last time, maybe updating once a month or so.)
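The store-and-retrieve idea in that parenthetical could be sketched roughly like this (a minimal sketch only; `compile_fn` and the one-month TTL are hypothetical stand-ins for whatever the real system would use):

```python
import time

CACHE_TTL = 30 * 24 * 60 * 60  # refresh roughly once a month (seconds)
_cache = {}  # taxon name -> (timestamp, compiled summary)

def get_summary(taxon, compile_fn):
    """Return a cached summary for `taxon`, recompiling only when stale.

    `compile_fn` is a placeholder for the expensive AI call that would
    actually generate the summary.
    """
    now = time.time()
    entry = _cache.get(taxon)
    if entry is not None and now - entry[0] < CACHE_TTL:
        return entry[1]  # still fresh: reuse last time's compilation
    summary = compile_fn(taxon)
    _cache[taxon] = (now, summary)
    return summary
```

With something like this, the expensive generation step runs once per taxon per month instead of once per request.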
Why do I oppose having AI identify observations and provide additional information? You know we already have trouble with people just looking at a CV suggestion and accepting it. Sometimes they even pick the first of a list of half a dozen suggestions even though the CV didn’t give any one of them priority. Now, along comes AI not only giving suggestion(s) but explaining why they’re right. If the information were supplied without their explicitly asking for it, do you suppose most people would read all the information? And meaningfully evaluate it? Not often enough! I think AI IDs with explanations would have even more credibility than the simple CV lists do now and would be accepted inaccurately even more often.
Also, as an identifier I would become annoyed as hell if the computer were always throwing up reams of “useful” data about organisms I can already identify and am just trying to identify; please leave me alone. I assume that any attempt to increase the AI component of iNaturalist would allow us to opt out of it (or better, to opt in only if we want it), but I can’t stress too strongly how important having options is. I mean, one reason iNaturalist is so successful is that it is (kind of) easy to use and not excessively annoying. Do you think this would be a welcoming site for me if it insisted on explaining why each American Robin observation is an American Robin?
I would especially be annoyed by AI summaries because I know there is a lot of false or out-of-date information out there for AI to skim through. When I need to look up information or pictures to compare or learn from, I have a feel for how credible the sources are – which books I should check, which websites have reliable information, which have good pictures but bad descriptions, which have a taxonomy that iNaturalist now treats as out of date, which I should use cautiously or not at all. I don’t have any idea how credible an AI compilation is because (1) I can’t know where the data comes from and (2) I can’t know what the AI has done to it. (Maybe you don’t think those are problems. Maybe you don’t work with taxonomy and identification of plants.)
Going straight to an AI explanation not only bypasses all that evaluation, it bypasses my ability to go out and figure out how credible information sources are. We all need to learn how to do such evaluation, all the more because AI is throwing a credible-seeming veil over everything (not just iNaturalist).
(I recently read a research article about how students who actually searched through sources and wrote their own reports gained useful skills that those using AI for this did not. Gee, aren’t we shocked.)
Now, I do think that identifiers’ comments on iNaturalist observations are usually accurate! But they’re uneven (many for some taxa, none for most), so I doubt AI-generated descriptions would be limited to them. But maybe.
I admit that my confidence in AI is not enhanced by the unwanted but seductively succinct AI summaries that I now get at the start of each Google search, summaries whose validity I cannot evaluate unless I already know the subject (in which case, why would I google it?) or I check other sources (in which case, why do I need the summary?).
I think we need to ask not only “How can AI be not too harmful/annoying?” but “What do we really need done that AI can do better than what we have now without being too harmful/annoying?” Identification isn’t such a task, though compiling information (if from reliable sources) might be.
Too much rant before breakfast. I should go.