iNat is a good example of what AI can now do, and I think it does very well, often even with difficult species. But, as might be expected, it also makes mistakes.
When the identifications proposed by users and the AI do not coincide, do you usually rely more on the former or the latter?
Humans will often have nuance and experience in IDing the rarer species, and I almost always defer to a human correction.
I do not "rely" on either. In other words, I am happy to consider suggestions from individuals or from the computer vision, but I always follow up with my own research: "does this ID make sense?" In some cases, it may be beyond my ability to research subtle distinctions between IDs. In such cases, I rely on the community ID (multiple entities) rather than the computer vision or a single human identifier (a single entity).
A large part of my research is developing AI to recognize marshbird vocalizations in audio files. Artificial intelligence is based on actual intelligence. So, while I can create an AI that outperforms people with less experience in marshbird sound ID, I cannot create an AI that outperforms me, because I am the one who ultimately validates the AI's IDs (its knowledge is based on my knowledge). Similarly, the computer vision may outperform certain less-experienced individuals (myself included, depending on the taxon), but it cannot outperform the community, because the community ultimately validates the computer vision's IDs.
I don't rely on either. I try not to suggest an ID unless I'm pretty certain I know what it is. I almost never factor in the opinion of the AI, as it's often wrong. However, on rare occasions it does make me look at an ob a second time with a different perspective. Usually I just use the AI tools to save time, and it depends on whether the AI's selection screen or the "Compare" button is faster at finding what I'm looking for. If neither has it, I just type it out myself. Sometimes I'll have conversations on an ob where someone has a different ID than me (these are typically experts).
Since iNaturalist refers to the automated species identification tool as "computer vision", I updated the topic title.
I use CV to find an ID and then replace it with an upper-rank taxon (until I am almost sure of it).
I rely on comments explaining why it is that species.
Humans all the way. The CV hasn't proven useful (yet) for the taxa I am interested in. When I see a wildly misidentified critter, I can pretty much guarantee I will see the "ID'd by computer vision" icon next to it.
Perhaps one day it will improve to a point where it can be trusted, but that will require a lot more data in my area, which isn't necessarily going to magically happen.
For commonly photographed, easily visually identified taxa in well-sampled areas (i.e. birds, butterflies, dragonflies, and the like in places like the US or Europe)? Yeah, the CV works wonders. For any taxa that aren't that? Maybe it can get to a ballpark genus or family level, but you aren't getting a 100% correct ID without a specialist.
All of the above, plus my preexisting knowledge, some Google-fu, and a pile of books. Even then it is easy to get led astray by one or the other. I would still give humans the edge, but it will be very interesting to see how long that lasts.
When I do African Unknowns, there often aren't any identifiers I can tag in. Today I was "in Ethiopia" and checked the leaderboard for dicots there - sigh - me and a couple of others. But over the border is Kenya - so I tagged in Kenyan dicot identifiers, and humans won! CV now has 8 plus 1 to work with in future. Lamiaceae - but not one you have seen before?
Humans, certainly. Although I sometimes use CV as a guide, reading the taxon pages that it suggests and also reviewing related/upper taxa, I do not put much trust in it; unlike people, CV cannot actively look for subtle, specific characteristics, which are crucial for many of the taxa I identify, such as Nymphalidae.
I think a lot of people put way too much faith in AI. That is not to say that the CV isn't very good or isn't useful, because it is very good and it is useful. But the experience of an expert in any given taxon cannot be matched. While the CV is often very good at giving a rough idea of where to start narrowing down an ID, or at providing examples of what an organism might be, I believe that is as far as the trust should go.
As a personal example, the birding community has really started using the Merlin audio ID tool to identify sounds while out in the field. It's a great thing, especially for those who might not have great hearing. One can open the app and it will suggest the birds that are calling in real time. I love birding by ear, know nearly all the songs and calls of birds in the eastern US (put me on the West Coast and I will be mostly useless), and lead and participate in bird walks on a weekly basis. Occasionally someone on the walk will use the app. There have been several (perhaps many) instances where I'll point out a bird that is calling and someone will say "Merlin says it's (x different species)" when I know it is not, and it feels as though I am being challenged and that the app knows better, when I'm certain it is incorrect. This exemplifies two things to me: some people are putting too much faith in AI, and the AI is clearly making erroneous suggestions.
I'm not saying we won't get to a point where the AI suggestions are 99% accurate or better, but I am saying that we're not there yet, and even then expert opinions should probably take precedence (or those of several experts in some instances).
As a final footnote, it should be noted that for AI/CV the quality of the input material affects the quality of the output suggestions. A clear picture that shows the organism well, or an easily audible sound without a lot of background noise, will usually yield a much more reliable suggestion than a distant picture or a quiet recording. There is only so much the AI/CV can do with poor images and sound files.
That is still true in iNat's case, but CV has now overtaken radiologists when it comes to things like analyzing mammograms.
Have human beings gotten to that point yet?
Depends on the human. Not me, unfortunately. I think the separation there is that humans who are expert in a taxon will know when an ID can't be refined further, or know when they're not positive.
My post wasn't meant to state that the CV isn't very good, because it is. It's often right. It's very often right with good pictures. But sometimes, or for some taxa, it's not right as often, and in most applications it does not clearly indicate the level of certainty it has in its prediction.
Neither do humans. AI has beaten the best chess and Go players. CV is beating medical professionals in medical imaging. We really want to think we will always be better. We justify it by claiming things we haven't actually measured. Bringing it back to topic, though, I still say both.
I use computer vision suggestions (we try not to use "AI", as it's not really artificial intelligence) as a guide, which is what it's really designed for: just suggestions that should be considered. However, that doesn't mean I (and, I imagine, many other people) don't sometimes just pick one because it seems correct. I think this is a mix of human nature and a user interface that should perhaps make it easier to compare suggestions (like what's currently in the Android app).
I'd like to see the model trained on coarser taxa and have it emphasize those rather than species. I will say that for many places I've been to recently, both in California and elsewhere, like Hawaii and Australia, it's been quite good for plants and animals, even if I often choose to manually enter a coarser ID when I make the observation.
Yes, this is pretty much my exact process too when I'm not sure where to start. Also, I'll edit my posts to say CV. I agree it would be nice to see coarser suggestions more frequently.
Sorry, I hope my first post didn't come off as saying your opinion is wrong. I agree with you that both are useful; I just meant to highlight that caution should be taken about putting all of our eggs in one basket, at this point at least.
I agree, for iNat. The problem for iNat CV is that there are many, many species out there, and the CV has been trained on only a few. For some species, experts are still far better at giving correct IDs. Eventually, though, we'll get to the point where experts mostly catch the errors that the CV makes. Then we'll get to the point where the CV corrects the experts. I can see a not-too-distant future when iNat itself says something like "Sorry kevintoo, you failed to notice the hairs on the tibia of that bee. You are a moron." Hopefully it will be more polite.
I think everybody should be aware that the CV is often wrong and should never be relied on too heavily. I think it's okay to use it still, but always be open to correction and know that there's a very real possibility it could be wrong. I've seen some people, usually new users who don't know any better, appear to think the CV is all-knowing and thus be untrusting of another user's input. That's a worst-case scenario, I think. As long as we are open to being corrected and to learning, incorrect CV suggestions are at least manageable.