That's perfectly reasonable, and I don't think anyone is demanding blind trust. Note that I was responding to a post by someone who said that generative AI will make iNat unusable for them but who, when asked if they would care to explain, essentially told us to shut up because it is none of our business.
My point was that it is surely a bit easier for staff to respond to concerns if they know more details about what those concerns are. Given that there has been a lot of discussion about all sorts of things, it is likely that people are coming from different places, with different understandings of the situation and different needs. Part of the conversation is going to have to be sorting out those various positions. Stating that one has a blanket mistrust of generative AI without providing more specifics isn't a great basis for figuring out where to go next.
The CV is not irrelevant. AI (both generative and non-generative) encompasses a wide range of technologies with their own different biases, problems, and risks. It is not as simple as all generative AI being inherently harmful while non-generative image recognition like the CV is fundamentally innocuous. The CV is a black box. It is subject to biases resulting from choices about what to include in its training set. It is susceptible to misuse (people uncritically using it to make IDs for themselves and others) and to circular reasoning, whereby bad data can, under certain circumstances, end up being fed back into the training, perpetuating misconceptions and wrong IDs. Implementing the CV required making decisions about how to manage these risks: how it should be integrated into the interface, how much freedom and power the algorithm should have, how much uncertainty should be allowed in its predictions, and how user interactions with it should be framed. It is quite conceivable that a team with a different vision for iNat might have made other choices, assigned the CV a much more prominent role, weighted results in problematic ways, etc. These are some of the same things that would need to be considered before any implementation of a generative AI tool.
So if one wants to get an idea of whether staff are likely to make responsible decisions as part of the current project, looking to past decision-making and the implementation of other AI tools may provide some insight into their philosophy and approach. Some may find reason for reassurance in past behavior; others may not. But since I doubt that anyone on staff knows at this point exactly what the "demo" will look like, I think it would be difficult at present for them to provide concrete details beyond the fact that they are exploring possibilities and taking feedback from the community about their concerns and about what would and would not be acceptable.
I read most of the blog post as standard grant verbiage of the sort that is fairly normal in science and research: framing plans so as to highlight elements that are key to the purpose outlined in the call for submissions. This does not necessarily correspond with how much of a role those elements will later play in the activities made possible by the grant. It also doesn't mean that the ideas sketched out in the grant proposal will be implemented in that exact form; the purpose of the proposal is to convince sponsors that one has ideas worth pursuing, but in practice the work may take a different direction as one figures out what is feasible. Given that the blog post was likely written in some haste and repeats many of the ideas from the post on the Vision Language demo, I suspect that it reflects what was written in the grant proposal rather than being a well-considered presentation for the iNat community about what the plans are now that the grant has been received.
I am not a researcher, but I work in academia, where the strategies for procuring funding are presumably not unlike those used by non-profits. Since a lot of funding comes in the form of project-based grants that cover expenses for, at most, a year or two, you are more or less constantly looking for the next source(s) of funding. Many grants are quite competitive, so you cast your net wide, apply for lots of different things, and hope that at least one of them is successful. Since iNat is already firmly anchored in digital technologies (an app-based mobile interface) and AI development (the CV), continuing to explore the possibilities of AI is one way for it to position itself as relevant and interesting for further investment. And at the moment, generative AI is what is felt to be cutting-edge and therefore where a good portion of the money is. Nobody is going to give a grant to support the creation of a wiki with identification information, or to help iNat develop a strategy for better user onboarding and recruiting IDers. Grants are generally based around the idea of some tangible outcome (a "deliverable"), which means that most grants are not simply going to cover running expenses so that iNat can continue as usual, without developing something new or producing results that wouldn't have been possible otherwise.